"RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead"?
Problem Description
I am trying to use a pre-trained model. Here is where the problem occurs.
Isn't the model supposed to take in a simple colored image? Why is it expecting a 4-dimensional input?
RuntimeError Traceback (most recent call last)
<ipython-input-51-d7abe3ef1355> in <module>()
33
34 # Forward pass the data through the model
---> 35 output = model(data)
36 init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
37
5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
336 _pair(0), self.dilation, self.groups)
337 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338 self.padding, self.dilation, self.groups)
339
340
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead
where
inception = models.inception_v3()
model = inception.to(device)
Recommended Answer
As Usman Ali wrote in his comment, PyTorch (and most other DL toolboxes) expects a batch of images as input. Thus you need to call
output = model(data[None, ...])
inserting a singleton "batch" dimension into your input data.
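As a quick sketch (using a random tensor in place of real image data), indexing with `None` and calling `unsqueeze(0)` both add the leading batch axis:

```python
import torch

# A single RGB image tensor of shape [3, 224, 224] (channels, height, width)
data = torch.randn(3, 224, 224)

# Both lines add a leading singleton batch dimension, giving [1, 3, 224, 224]
batched = data[None, ...]        # indexing with None inserts a new axis
batched_alt = data.unsqueeze(0)  # equivalent, via unsqueeze

print(batched.shape)                      # torch.Size([1, 3, 224, 224])
print(torch.equal(batched, batched_alt))  # True
```

Either form works; `data[None, ...]` is just the more compact spelling.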
Please also note that the model you are using might expect a different input size (3x299x299 for inception_v3) rather than 3x224x224.
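One way to bring a batched tensor to 299x299 is `torch.nn.functional.interpolate`; this is only a sketch with a random tensor, and in a real pipeline you would typically resize with torchvision transforms during preprocessing instead:

```python
import torch
import torch.nn.functional as F

# A single 3x224x224 image, batched to [1, 3, 224, 224]
data = torch.randn(3, 224, 224)[None, ...]

# Resize spatial dimensions to the 299x299 that inception_v3 expects
resized = F.interpolate(data, size=(299, 299), mode="bilinear", align_corners=False)
print(resized.shape)  # torch.Size([1, 3, 299, 299])
```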