PyTorch VAE fails conversion to onnx


Question

I'm trying to convert a PyTorch VAE to onnx, but I'm getting: torch.onnx.symbolic.normal does not exist

The problem seems to originate from the reparametrize() function:

    def reparametrize(self, mu, logvar):
        std = logvar.mul(0.5).exp_()
        if self.have_cuda:
            eps = torch.normal(torch.zeros(std.size()), torch.ones(std.size())).cuda()
        else:
            eps = torch.normal(torch.zeros(std.size()), torch.ones(std.size()))
        return eps.mul(std).add_(mu)

I also tried:

eps = torch.cuda.FloatTensor(std.size()).normal_()

which produces the error:

    Schema not found for node. File a bug report.
    Node: %173 : Float(1, 20) = aten::normal(%169, %170, %171, %172), scope: VAE 
    Input types:Float(1, 20), float, float, Generator

and:

eps = torch.randn(std.size()).cuda()

which produces the error:

    builtins.TypeError: i_(): incompatible function arguments. The following argument types are supported:
    1. (self: torch._C.Node, arg0: str, arg1: int) -> torch._C.Node
    Invoked with: %137 : Tensor = onnx::RandomNormal(), scope: VAE, 'shape', 133 defined in (%133 : int[] = prim::ListConstruct(%128, %132), scope: VAE) (occurred when translating randn)

I am using cuda.

Any thoughts appreciated. Perhaps I need to approach the z/latent differently for onnx?

NOTE: Stepping through, I can see that it's finding RandomNormal() for torch.randn(), which should be correct. But I don't really have access to the arguments at that point, so how can I fix it?

Answer

In short, the code below should work; at least in my environment it ran without errors.

It seems that the .size() operator can return a variable rather than a constant, which causes the error during onnx compilation. (I got the same error when I changed the code back to use .size().)

import torch
import torch.utils.data
from torch import nn
from torch.nn import functional as F



IN_DIMS = 28 * 28
BATCH_SIZE = 10
FEATURE_DIM = 20

class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()

        self.fc1 = nn.Linear(784, 400)
        self.fc21 = nn.Linear(400, FEATURE_DIM)
        self.fc22 = nn.Linear(400, FEATURE_DIM)
        self.fc3 = nn.Linear(FEATURE_DIM, 400)
        self.fc4 = nn.Linear(400, 784)

    def encode(self, x):
        h1 = F.relu(self.fc1(x))
        return self.fc21(h1), self.fc22(h1)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5*logvar)
        # Sample eps with a fixed, known shape instead of std.size(),
        # which the ONNX exporter does not treat as a constant.
        eps = torch.randn(BATCH_SIZE, FEATURE_DIM, device='cuda')
        return eps.mul(std).add_(mu)

    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return torch.sigmoid(self.fc4(h3))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon_x = self.decode(z)

        return recon_x

model = VAE().cuda()

# Export with a fixed-size dummy input so that every shape in the graph is constant.
dummy_input = torch.randn(BATCH_SIZE, IN_DIMS, device='cuda')
torch.onnx.export(model, dummy_input, "vae.onnx", verbose=True)
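
If you want to sanity-check the exported file, a quick round trip through onnxruntime works. The snippet below is a minimal sketch, assuming onnx and onnxruntime are installed (neither is required by the export itself); the input name is read from the session rather than hard-coded, since torch.onnx.export assigns it automatically. Because the graph contains a RandomNormal node, the reconstruction will differ from run to run.

import numpy as np
import onnx
import onnxruntime as ort

# Structural validation of the exported graph.
onnx.checker.check_model(onnx.load("vae.onnx"))

# Run one inference pass on CPU; shapes match the dummy_input used for export.
sess = ort.InferenceSession("vae.onnx")
input_name = sess.get_inputs()[0].name
x = np.random.randn(10, 28 * 28).astype(np.float32)
recon = sess.run(None, {input_name: x})[0]
print(recon.shape)  # expected: (10, 784)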
