How to resolve runtime error due to size mismatch in PyTorch?
Question
I am trying to implement a simple autoencoder using PyTorch. My dataset consists of 256 x 256 x 3 images. I have built a torch.utils.data.dataloader.DataLoader object which stores the images as tensors. When I run the autoencoder, I get a runtime error:
size mismatch, m1: [76800 x 256], m2: [784 x 128] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1518371252923/work/torch/lib/TH/generic/THTensorMath.c:1434
These are my hyperparameters:
batch_size = 100
learning_rate = 1e-3
num_epochs = 100
Following is the architecture of my autoencoder:
import torch.nn as nn

class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(3*256*256, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(True),
            nn.Linear(64, 12),
            nn.ReLU(True),
            nn.Linear(12, 3))
        self.decoder = nn.Sequential(
            nn.Linear(3, 12),
            nn.ReLU(True),
            nn.Linear(12, 64),
            nn.ReLU(True),
            nn.Linear(64, 128),
            nn.Linear(128, 3*256*256),
            nn.ReLU())

    def forward(self, x):
        x = self.encoder(x)
        #x = self.decoder(x)
        return x
This is the code I used to run the model:
for epoch in range(num_epochs):
    for data in dataloader:
        img = data['image']
        img = Variable(img)
        # ===================forward=====================
        output = model(img)
        loss = criterion(output, img)
        # ===================backward====================
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # ===================log========================
    print('epoch [{}/{}], loss:{:.4f}'
          .format(epoch+1, num_epochs, loss.data[0]))
    if epoch % 10 == 0:
        pic = show_img(output.cpu().data)
        save_image(pic, './dc_img/image_{}.jpg'.format(epoch))
Answer
If your input is 3 x 256 x 256, then you need to convert it to B x N to pass it through the linear layer nn.Linear(3*256*256, 128), where B is the batch_size and N is the linear layer's input size. If you are feeding one image at a time, you can convert your input tensor of shape 3 x 256 x 256 to 1 x (3*256*256) as follows.
img = img.view(1, -1) # converts [3 x 256 x 256] to 1 x 196608
output = model(img)
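Note that the training loop in the question draws batches from a DataLoader with batch_size=100, so data['image'] is presumably a 4-D tensor of shape [100, 3, 256, 256] rather than a single image (the m1 shape [76800 x 256] in the error is consistent with such a batch being folded over all but its last dimension, since 100 x 3 x 256 = 76800). Below is a minimal sketch of the batched version of the same fix; the random tensor stands in for a real batch, and the layer mirrors the first encoder layer:

import torch
import torch.nn as nn

# Stand-in for one batch from the DataLoader: 100 RGB images of 256 x 256.
img = torch.randn(100, 3, 256, 256)

# Keep the batch dimension and flatten everything else:
# [100, 3, 256, 256] -> [100, 196608]
img = img.view(img.size(0), -1)

layer = nn.Linear(3 * 256 * 256, 128)  # same as the first encoder layer
out = layer(img)
print(out.shape)  # torch.Size([100, 128])

In the training loop this amounts to adding img = img.view(img.size(0), -1) right after img = data['image']. Since criterion(output, img) then compares flattened tensors, you would reshape the reconstruction back with something like output.view(-1, 3, 256, 256) before passing it to save_image.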