"RuntimeError:对于4维权重32 3 3期望4维输入,但是却得到了大小为[3、224、224]的3维输入."? [英] "RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead"?


Problem description


I am trying to use a pre-trained model. Here's where the problem occurs


Isn't the model supposed to take in a simple colored image? Why is it expecting a 4-dimensional input?

RuntimeError                              Traceback (most recent call last)
<ipython-input-51-d7abe3ef1355> in <module>()
     33 
     34 # Forward pass the data through the model
---> 35 output = model(data)
     36 init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
     37 

5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    336                             _pair(0), self.dilation, self.groups)
    337         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338                         self.padding, self.dilation, self.groups)
    339 
    340 

RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 3, but got 3-dimensional input of size [3, 224, 224] instead

where

inception = models.inception_v3()
model = inception.to(device)

Solution

As Usman Ali wrote in his comment, PyTorch (and most other DL toolboxes) expects a batch of images as input. Thus you need to call

output = model(data[None, ...])  

This inserts a singleton "batch" dimension into your input data.

Please also note that the model you are using might expect a different input size (3x299x299) rather than 3x224x224.
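The "4-dimensional weight 32 3 3" in the error message is the weight tensor of a convolution with 3 input channels, 32 output channels, and a 3x3 kernel, i.e. shape (32, 3, 3, 3). A minimal sketch, assuming PyTorch is installed and using a random tensor in place of a real image, showing the batched call succeed:

```python
import torch
import torch.nn as nn

# A layer whose weight has shape (32, 3, 3, 3) -- the "4-dimensional
# weight 32 3 3" referred to in the error message.
conv = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)

img = torch.randn(3, 224, 224)  # one image, no batch dimension
batch = img[None, ...]          # same as img.unsqueeze(0)

out = conv(batch)
print(batch.shape)  # torch.Size([1, 3, 224, 224])
print(out.shape)    # torch.Size([1, 32, 222, 222])
```

With no padding and a 3x3 kernel, the spatial size shrinks from 224 to 222; the leading 1 is the batch dimension that the model expected all along.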
