PyTorch: passing numpy array for weight initialization

Problem Description

I'd like to initialize the parameters of RNN with np arrays.

In the following example, I want to pass w to the parameters of rnn. I know pytorch provides many initialization methods like Xavier, uniform, etc., but is there a way to initialize the parameters by passing numpy arrays?

import numpy as np
import torch
from torch import nn

input_size, hidden_size, num_layers = 3, 4, 2   # example sizes
rng = np.random.RandomState(313)
w = rng.randn(input_size, hidden_size).astype(np.float32)

rnn = nn.RNN(input_size, hidden_size, num_layers)
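For reference, numpy arrays are brought into PyTorch with torch.from_numpy, which shares memory with the source array; the sizes below are just the example values above:

```python
import numpy as np
import torch

rng = np.random.RandomState(313)
w = rng.randn(3, 4).astype(np.float32)  # float32 matches PyTorch's default dtype

t = torch.from_numpy(w)  # zero-copy conversion; t shares memory with w
print(t.shape)   # torch.Size([3, 4])
print(t.dtype)   # torch.float32
```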

Answer

First, let's note that nn.RNN has more than one weight variable, c.f. the documentation:

Variables:

  • weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size * input_size) for k = 0. Otherwise, the shape is (hidden_size * hidden_size)
  • weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size * hidden_size)
  • bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
  • bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
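The names and shapes above can be checked directly on an instantiated module (sizes here are the example values used later in this answer):

```python
import torch
from torch import nn

input_size, hidden_size, num_layers = 3, 4, 2
rnn = nn.RNN(input_size, hidden_size, num_layers)

# Parameters are registered per layer in the order ih-weight, hh-weight,
# ih-bias, hh-bias; only layer 0 sees the raw input, so only its ih-weight
# involves input_size.
for name, param in rnn.named_parameters():
    print(name, tuple(param.shape))
# weight_ih_l0 (4, 3)
# weight_hh_l0 (4, 4)
# ...
```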

Now, each of these variables (Parameter instances) is an attribute of your nn.RNN instance. You can access and edit them in two ways, as shown below:

  • Solution 1: Accessing all the RNN Parameter attributes by name (rnn.weight_hh_lK, rnn.weight_ih_lK, etc.):
import torch
from torch import nn
import numpy as np

input_size, hidden_size, num_layers = 3, 4, 2
use_bias = True
rng = np.random.RandomState(313)

rnn = nn.RNN(input_size, hidden_size, num_layers, bias=use_bias)

def set_nn_parameter_data(layer, parameter_name, new_data):
    param = getattr(layer, parameter_name)
    param.data = new_data

for i in range(num_layers):
    # Layer 0's input-hidden weights have shape (hidden_size, input_size);
    # deeper layers receive the previous layer's hidden state instead.
    in_features = input_size if i == 0 else hidden_size
    weights_hh_layer_i = rng.randn(hidden_size, hidden_size).astype(np.float32)
    weights_ih_layer_i = rng.randn(hidden_size, in_features).astype(np.float32)
    set_nn_parameter_data(rnn, "weight_hh_l{}".format(i), 
                          torch.from_numpy(weights_hh_layer_i))
    set_nn_parameter_data(rnn, "weight_ih_l{}".format(i), 
                          torch.from_numpy(weights_ih_layer_i))

    if use_bias:
        bias_hh_layer_i = rng.randn(hidden_size).astype(np.float32)
        bias_ih_layer_i = rng.randn(hidden_size).astype(np.float32)
        set_nn_parameter_data(rnn, "bias_hh_l{}".format(i), 
                              torch.from_numpy(bias_hh_layer_i))
        set_nn_parameter_data(rnn, "bias_ih_l{}".format(i), 
                              torch.from_numpy(bias_ih_layer_i))
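As a quick sanity check (a self-contained sketch using the same example sizes), overwriting a parameter's .data by name takes effect and the module still runs a forward pass:

```python
import numpy as np
import torch
from torch import nn

input_size, hidden_size, num_layers = 3, 4, 2
rng = np.random.RandomState(313)
rnn = nn.RNN(input_size, hidden_size, num_layers)

# Overwrite one parameter by name, then confirm the module holds it
new_w = rng.randn(hidden_size, hidden_size).astype(np.float32)
rnn.weight_hh_l0.data = torch.from_numpy(new_w)
assert np.allclose(rnn.weight_hh_l0.detach().numpy(), new_w)

x = torch.randn(5, 1, input_size)   # (seq_len, batch, input_size)
out, h_n = rnn(x)
print(tuple(out.shape))             # (5, 1, 4)
```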

  • Solution 2: Accessing all the RNN Parameter attributes through the rnn.all_weights list attribute:
import torch
from torch import nn
import numpy as np

input_size, hidden_size, num_layers = 3, 4, 2
use_bias = True
rng = np.random.RandomState(313)

rnn = nn.RNN(input_size, hidden_size, num_layers, bias=use_bias)

for i in range(num_layers):
    # all_weights[i] holds [weight_ih, weight_hh, bias_ih, bias_hh] for layer i
    in_features = input_size if i == 0 else hidden_size
    weights_ih_layer_i = rng.randn(hidden_size, in_features).astype(np.float32)
    weights_hh_layer_i = rng.randn(hidden_size, hidden_size).astype(np.float32)
    rnn.all_weights[i][0].data = torch.from_numpy(weights_ih_layer_i)
    rnn.all_weights[i][1].data = torch.from_numpy(weights_hh_layer_i)

    if use_bias:
        bias_ih_layer_i = rng.randn(hidden_size).astype(np.float32)
        bias_hh_layer_i = rng.randn(hidden_size).astype(np.float32)
        rnn.all_weights[i][2].data = torch.from_numpy(bias_ih_layer_i)
        rnn.all_weights[i][3].data = torch.from_numpy(bias_hh_layer_i)
