How to translate an MLP neural network from TensorFlow to PyTorch

Question

I have built up an MLP neural network using TensorFlow, as follows:

# Keras imports, assuming the tf.keras API
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model_mlp = Sequential()
model_mlp.add(Dense(units=35, input_dim=train_X.shape[1], kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=86, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=86, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=10, kernel_initializer='normal', activation='relu'))
model_mlp.add(Dense(units=1))

I want to convert the above MLP code to PyTorch. How do I do that? This is my attempt:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(train_X.shape[1], 35)
        self.fc2 = nn.Linear(35, 86)
        self.fc3 = nn.Linear(86, 86)
        self.fc4 = nn.Linear(86, 10)
        self.fc5 = nn.Linear(10, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = self.fc5(x)  # no activation on the output layer, matching the Keras model
        return x

    def predict(self, x_test):
        x_test = torch.from_numpy(x_test).float()
        x_test = self.forward(x_test)
        return x_test.view(-1).data.numpy()

model = MLP()

I use the same dataset, but the two implementations give different answers: the code written in TensorFlow consistently produces much better results than the code written in PyTorch. I wonder whether my PyTorch code is incorrect. If the PyTorch code is correct, I would like to know how to explain the difference. I am looking forward to any replies.

Answer

Welcome to PyTorch!

I guess the problem is the initialization of your network: Keras and PyTorch use different default weight initializers, so the two models do not start from equivalent points. Here is how I would do it:

import torch
import torch.nn as nn
from torch.optim import Adam

def init_weights(m):
    if type(m) == nn.Linear:
        torch.nn.init.xavier_normal_(m.weight)  # Xavier normal (called Glorot in TensorFlow/Keras)
        m.bias.data.fill_(0.01)                 # initialize the bias with a small constant

class MLP(nn.Module):
    def __init__(self, input_dim):
        super(MLP, self).__init__()
        self.mlp = nn.Sequential(nn.Linear(input_dim, 35), nn.ReLU(),
                                 nn.Linear(35, 86), nn.ReLU(),
                                 nn.Linear(86, 86), nn.ReLU(),
                                 nn.Linear(86, 10), nn.ReLU(),
                                 nn.Linear(10, 1))  # no activation here: BCEWithLogitsLoss expects raw logits

    def forward(self, x):
        y = self.mlp(x)
        return y

model = MLP(input_dim)
model.apply(init_weights)

optimizer = Adam(model.parameters())
loss_func = nn.BCEWithLogitsLoss()

# training loop
for data, label in dataloader:
    optimizer.zero_grad()

    pred = model(data)
    loss = loss_func(pred, label)
    loss.backward()
    optimizer.step()
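As a side note: if the goal is to reproduce the Keras model's starting point exactly rather than to switch to Xavier, Keras's kernel_initializer='normal' is, as far as I know, an alias for RandomNormal(mean=0.0, stddev=0.05), and Dense biases default to zeros. A minimal sketch of an equivalent init function under that assumption:

def init_weights_keras_style(m):
    if type(m) == nn.Linear:
        # Keras 'normal' initializer: N(0, 0.05) by default (assumed here)
        torch.nn.init.normal_(m.weight, mean=0.0, std=0.05)
        m.bias.data.fill_(0.0)  # Keras's default bias initializer is zeros

model.apply(init_weights_keras_style)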

Notice that in PyTorch we call model(x), not model.forward(x). This is because nn.Module.__call__() wraps forward() and applies the hooks that are used in the backward pass; calling forward() directly bypasses them.
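For example, a minimal sketch (the batch is a placeholder) showing that a forward hook registered on the first layer fires with model(x) but not with a direct model.forward(x) call:

import torch

hook_calls = []
# register a forward hook on the first Linear layer of the Sequential above
model.mlp[0].register_forward_hook(lambda module, inp, out: hook_calls.append(module))

x = torch.randn(4, input_dim)  # placeholder batch
model(x)           # goes through nn.Module.__call__, so the hook fires
model.forward(x)   # bypasses __call__, so the hook does not fire
print(len(hook_calls))  # prints 1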

You can check the documentation on weight initialization here: https://pytorch.org/docs/stable/nn.init.html
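For instance, a couple of the in-place initializers from that page (the trailing underscore is PyTorch's convention for in-place operations), applied to a single layer:

layer = nn.Linear(86, 10)
nn.init.kaiming_normal_(layer.weight, nonlinearity='relu')  # He initialization, well suited to ReLU networks
nn.init.zeros_(layer.bias)                                  # zero the biases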
