Embedding Layer in Python: How to Use Correctly with Torchsummary?

Question

This is a minimal working/reproducible example:

import torch
import torch.nn as nn
from torchsummary import summary

class Network(nn.Module): 
    def __init__(self, channels_img, features_d, num_classes, img_size): 
        super(Network, self).__init__()
        self.img_size = img_size
        self.disc = nn.Conv2d(
            in_channels = channels_img + 1, 
            out_channels = features_d, 
            kernel_size = (4,4)
        )

        # ConditionalGAN: 
        self.embed = nn.Embedding(
            num_embeddings = num_classes, 
            embedding_dim = img_size * img_size
        )

    def forward(self, x, labels): 
        embedding = self.embed(labels).view(labels.shape[0], 1, self.img_size, self.img_size)
        x = torch.cat([x, embedding], dim = 1)
        return self.disc(x) 
    
# device: 
device = torch.device("cpu")

# hyperparameter: 
batch_size = 64

# Initialize model: 
model = Network(
    channels_img = 1, 
    features_d = 16, 
    num_classes = 10, 
    img_size = 28).to(device) 

# Print model summary: 
summary(
    model, 
    input_size = [(1, 28, 28), (1, 28, 28)], # MNIST
    batch_size = batch_size
)

The error message I get is (for the line with summary(...)):

Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)

I saw in this post that .to(torch.int64) is supposed to help, but I honestly don't know where to write it.

Thanks!

Answer

The problem is here:

self.embed(labels)...

An embedding layer is kind of a mapping between discrete indices and continuous values, as stated here. That is, its inputs should be integers, and it will give you back floats. In your case, for example, you are embedding the class labels of MNIST, which range from 0 to 9, into a continuum (for some reason that I don't know, as I'm not familiar with GANs :)). But in short, that embedding layer will give a 10 -> 784 transformation for you, and PyTorch says those 10 numbers should be integers.
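
As a standalone illustration (not from the original post), here is a minimal sketch of that 10 -> 784 mapping:

import torch
import torch.nn as nn

# 10 possible class labels, each mapped to a 784-dimensional float vector:
embed = nn.Embedding(num_embeddings = 10, embedding_dim = 784)

labels = torch.tensor([3, 7])    # integer (long) indices are required
print(embed(labels).shape)       # torch.Size([2, 784])
print(embed(labels).dtype)       # torch.float32

# Passing float indices instead reproduces the error from the question:
# embed(torch.tensor([3.0, 7.0]))  # RuntimeError: ... scalar type Long ...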

A fancy name for an integer type is "long", so you need to make sure the data type of what goes into self.embed is of that type. There are some ways to do that:

self.embed(labels.long())

self.embed(labels.to(torch.long))

self.embed(labels.to(torch.int64))
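
All three casts are equivalent. Concretely, the cast goes inside forward, right before labels reaches self.embed. Here is a minimal sketch of the corrected method (the rest of the Network class from the question stays unchanged):

    def forward(self, x, labels): 
        # Cast the labels to long so nn.Embedding accepts them as indices;
        # torchsummary feeds float tensors to every input, which is what
        # triggered the error above.
        labels = labels.long()
        embedding = self.embed(labels).view(labels.shape[0], 1, self.img_size, self.img_size)
        x = torch.cat([x, embedding], dim = 1)
        return self.disc(x)

With real MNIST-style inputs, e.g. model(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))), this forward pass works as intended. For the summary(...) call itself, note that the second entry in input_size describes the labels tensor, so a shape like (1,) (one label per sample) is presumably closer to what forward expects than (1, 28, 28).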

The long datatype is really a 64-bit integer (you may see here), so all of these work.
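
You can check the alias yourself:

import torch

print(torch.long == torch.int64)   # True; torch.long is just an alias for torch.int64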
