size mismatch, m1: [3584 x 28], m2: [784 x 128] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:940

Problem Description

I have executed the following code and am getting the error shown at the very bottom. I would like to know how to resolve this. Thanks.

import torch.nn as nn
import torch.nn.functional as F
from torch import optim

from torchvision import transforms
_tasks = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

from torchvision.datasets import MNIST
mnist = MNIST("data", download=True, train=True, transform=_tasks)

from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler

## create training and validation split
split = int(0.8 * len(mnist))

index_list = list(range(len(mnist)))
train_idx, valid_idx = index_list[:split], index_list[split:]

## create sampler objects using SubsetRandomSampler
tr_sampler = SubsetRandomSampler(train_idx)
val_sampler = SubsetRandomSampler(valid_idx)

## create iterator objects for train and valid datasets
trainloader = DataLoader(mnist, batch_size=256, sampler=tr_sampler)
validloader = DataLoader(mnist, batch_size=256, sampler=val_sampler)

## create the model

class Model(nn.Module):
  def __init__(self):
    super().__init__()
    self.hidden = nn.Linear(784, 128)
    self.output = nn.Linear(128, 10)

  def forward(self, x):
    x = self.hidden(x)
    x = F.sigmoid(x)
    x = self.output(x)
    return x

model = Model()

loss_function = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-6, momentum=0.9, nesterov=True)

for epoch in range(1, 11): ## run the model for 10 epochs
  train_loss, valid_loss = [], []

  #training part
  model.train()
  for data, target in trainloader:
    optimizer.zero_grad()

    #1. forward propagation
    output = model(data)

    #2. loss calculation
    loss = loss_function(output, target)

    #3. backward propagation
    loss.backward()

    #4. weight optimization
    optimizer.step()

    train_loss.append(loss.item())

  # evaluation part
  model.eval()
  for data, target in validloader:
    output = model(data)
    loss = loss_function(output, target)
    valid_loss.append(loss.item())

On executing this, I get the following error:

RuntimeError                              Traceback (most recent call last)
in ()
----> 1 output = model(data)
      2
      3 ## 2. loss calculation
      4 loss = loss_function(output, target)
      5

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1352         ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())
   1353     else:
-> 1354         output = input.matmul(weight.t())
   1355         if bias is not None:
   1356             output += torch.jit._unwrap_optional(bias)

RuntimeError: size mismatch, m1: [3584 x 28], m2: [784 x 128] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:940
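
To read the error: m2, [784 x 128], is the transposed weight of the Linear(784, 128) layer, while m1 is the un-flattened image batch seen as a matrix whose last dimension is 28 (one 28-pixel image row) instead of 784, so the matrix product cannot line up (28 != 784). A quick, illustrative shape check (the tensor x below is a stand-in for a DataLoader batch, not code from the original post):

import torch
import torch.nn as nn

hidden = nn.Linear(784, 128)                   # same layer as in the model above

x = torch.randn(256, 1, 28, 28)                # stand-in for one raw MNIST batch, [B, C, H, W]
print(hidden.weight.t().shape)                 # torch.Size([784, 128])  -> this is m2 in the error
print(x.view(x.shape[0], -1).shape)            # torch.Size([256, 784])  -> flattened input that matches the layer
print(hidden(x.view(x.shape[0], -1)).shape)    # torch.Size([256, 128])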

Recommended Answer

Your input MNIST data has shape [256, 1, 28, 28], corresponding to [B, C, H, W]. You need to flatten the input images into a single 784-long vector before feeding them to the linear layer Linear(784, 128), so that the input becomes [256, 784], corresponding to [B, N], where N = 1x28x28 is your image size. This can be done as follows:

for data, target in trainloader:
    # Flatten MNIST images into a 784-long vector
    data = data.view(data.shape[0], -1)

    optimizer.zero_grad()
    ...

The same needs to be done in the validation loop.
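
For concreteness, a minimal sketch of the evaluation part with the same flattening applied (this simply adds the answer's data.view line to the validation loop from the question; it is an illustration, not code from the original answer):

model.eval()
for data, target in validloader:
    # Flatten MNIST images into a 784-long vector, as in the training loop
    data = data.view(data.shape[0], -1)
    output = model(data)
    loss = loss_function(output, target)
    valid_loss.append(loss.item())

An equivalent design choice would be to flatten inside Model.forward instead, e.g. x = x.view(x.shape[0], -1) as its first line (or a leading nn.Flatten() module in newer PyTorch versions), so that both loops can keep passing the raw [B, 1, 28, 28] batches unchanged.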
