Pytorch ValueError: optimizer got an empty parameter list
Problem description
When trying to create a neural network and optimize it using Pytorch, I am getting
ValueError: optimizer got an empty parameter list
Here is the code.
import torch.nn as nn
import torch.nn.functional as F
from os.path import dirname
from os import getcwd
from os.path import realpath
from sys import argv

class NetActor(nn.Module):
    def __init__(self, args, state_vector_size, action_vector_size, hidden_layer_size_list):
        super(NetActor, self).__init__()
        self.args = args
        self.state_vector_size = state_vector_size
        self.action_vector_size = action_vector_size
        self.layer_sizes = hidden_layer_size_list
        self.layer_sizes.append(action_vector_size)
        self.nn_layers = []
        self._create_net()

    def _create_net(self):
        prev_layer_size = self.state_vector_size
        for next_layer_size in self.layer_sizes:
            next_layer = nn.Linear(prev_layer_size, next_layer_size)
            prev_layer_size = next_layer_size
            self.nn_layers.append(next_layer)

    def forward(self, torch_state):
        activations = torch_state
        for i, layer in enumerate(self.nn_layers):
            if i != len(self.nn_layers) - 1:
                activations = F.relu(layer(activations))
            else:
                activations = layer(activations)
        probs = F.softmax(activations, dim=-1)
        return probs
and then the call
    self.actor_nn = NetActor(self.args, 4, 2, [128])
    self.actor_optimizer = optim.Adam(self.actor_nn.parameters(), lr=args.learning_rate)
gives the very informative error
ValueError: optimizer got an empty parameter list
I find it hard to understand what exactly in the network's definition makes the network have parameters.
I am following and expanding the example I found in Pytorch's tutorial code.
I can't really tell what difference between my code and theirs causes mine to have no parameters to optimize.
How can I make my network have parameters like the linked example?
Your NetActor does not directly store any nn.Parameter. Moreover, all the layers it eventually uses in forward are stored in a simple Python list, self.nn_layers.
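A minimal sketch of this failure mode (the class name is hypothetical, but it mirrors the question's pattern): submodules kept in a plain Python list are invisible to nn.Module's registration machinery, so parameters() yields nothing.

```python
import torch.nn as nn

class PlainListNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.nn_layers = []                     # plain list: children NOT registered
        self.nn_layers.append(nn.Linear(4, 2))  # this Linear's weight/bias stay hidden

net = PlainListNet()
print(len(list(net.parameters())))  # 0 -- exactly why the optimizer complains
```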
If you want self.actor_nn.parameters() to know that the items stored in the list self.nn_layers may contain trainable parameters, you should work with containers. Specifically, making self.nn_layers an nn.ModuleList instead of a simple list should solve your problem:
self.nn_layers = nn.ModuleList()
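Here is a sketch of the fix applied to the question's _create_net loop (the class name is mine; the sizes 4 -> [128] -> 2 match the NetActor(self.args, 4, 2, [128]) call). With an nn.ModuleList, each appended Linear is registered as a child module, so parameters() is no longer empty.

```python
import torch.nn as nn

class FixedActor(nn.Module):
    def __init__(self, state_size, action_size, hidden_sizes):
        super().__init__()
        sizes = hidden_sizes + [action_size]
        self.nn_layers = nn.ModuleList()  # container: registers appended children
        prev_size = state_size
        for next_size in sizes:
            self.nn_layers.append(nn.Linear(prev_size, next_size))
            prev_size = next_size

net = FixedActor(4, 2, [128])
print(len(list(net.parameters())))  # 4: weight + bias for each of the two Linears
```

With this change, optim.Adam(net.parameters(), lr=...) receives a non-empty parameter list and constructs without error.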