Is it possible to make a trainable variable not trainable?


Problem Description

I created a trainable variable in a scope. Later, I entered the same scope, set the scope to reuse_variables, and used get_variable to retrieve the same variable. However, I cannot set the variable's trainable property to False. My get_variable line looks like:

weight_var = tf.get_variable('weights', trainable=False)

But the variable 'weights' still appears in the output of tf.trainable_variables.

Can I set a shared variable's trainable flag to False by using get_variable?

The reason I want to do this is that I'm trying to reuse low-level filters pre-trained on VGG net in my model. I want to build the graph as before, retrieve the weights variable, assign the VGG filter values to it, and then keep them fixed during the following training steps.

Answer

After looking at the documentation and the code, I was not able to find a way to remove a variable from the TRAINABLE_VARIABLES collection.

  • The first time tf.get_variable('weights', trainable=True) is called, the variable is added to the list of TRAINABLE_VARIABLES.
  • The second time you call tf.get_variable('weights', trainable=False), you get the same variable, but the argument trainable=False has no effect because the variable is already present in the list of TRAINABLE_VARIABLES (and there is no way to remove it from there).
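This behavior can be reproduced with a short sketch using the TF 1.x API (run here through tf.compat.v1; the scope name conv1 is illustrative):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.variable_scope('conv1'):
    weight_var = tf.get_variable('weights', shape=[3, 3], trainable=True)

with tf.variable_scope('conv1', reuse=True):
    # trainable=False is silently ignored here: the variable already
    # exists and was registered as trainable when it was first created.
    same_var = tf.get_variable('weights', trainable=False)

is_same = same_var is weight_var  # True: get_variable returned the same object
still_trainable = any(v.name == 'conv1/weights:0'
                      for v in tf.trainable_variables())  # still True
```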

When calling the minimize method of the optimizer (see the doc), you can pass var_list=[...] as an argument with the variables you want to optimize.

For instance, if you want to freeze all the layers of VGG except the last two, you can pass the weights of the last two layers in var_list.
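A minimal sketch of this freezing trick, again with the TF 1.x API; the scope names vgg_low and head stand in for the pre-trained and fine-tuned parts and are not VGG's real layer names:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 4])
with tf.variable_scope('vgg_low'):
    w1 = tf.get_variable('weights', shape=[4, 8])  # pretend pre-trained layer
    h = tf.matmul(x, w1)
with tf.variable_scope('head'):
    w2 = tf.get_variable('weights', shape=[8, 2])  # layer we still train
    y = tf.matmul(h, w2)

loss = tf.reduce_mean(tf.square(y))

# Collect only the variables of the 'head' scope and optimize those;
# everything under 'vgg_low' receives no gradient updates.
head_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='head')
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, var_list=head_vars)
```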

You can use a tf.train.Saver() to save variables and restore them later (see this tutorial).

  • First you train your entire VGG model with all trainable variables, and save them in a checkpoint file by calling saver.save(sess, "/path/to/dir/model.ckpt").
  • Then (in another file) you train the second version with non-trainable variables, loading the variables previously stored with saver.restore(sess, "/path/to/dir/model.ckpt").
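The two steps above can be sketched in one script by using two graphs; in practice they would live in separate files, and the checkpoint path here is a temporary directory rather than a real model path:

```python
import os
import tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

ckpt_path = os.path.join(tempfile.mkdtemp(), 'model.ckpt')

# First run: the variable is trainable; after training, save a checkpoint.
g1 = tf.Graph()
with g1.as_default():
    w = tf.get_variable('weights', initializer=tf.ones([2, 2]))
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, ckpt_path)

# Second run: rebuild the same variable as non-trainable and restore it.
g2 = tf.Graph()
with g2.as_default():
    w_frozen = tf.get_variable('weights', shape=[2, 2], trainable=False)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, ckpt_path)
        restored = sess.run(w_frozen)
    n_trainable = len(tf.trainable_variables())  # 0: 'weights' is frozen now
```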

Optionally, you can decide to save only some of the variables in your checkpoint file. See the doc for more info.
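Saving only a subset works by passing an explicit var_list to tf.train.Saver; a small sketch, where the variable names keep_me and skip_me are purely illustrative:

```python
import os
import tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

keep = tf.get_variable('keep_me', initializer=1.0)
skip = tf.get_variable('skip_me', initializer=2.0)

# Only 'keep_me' goes into the checkpoint; 'skip_me' is left out entirely.
saver = tf.train.Saver(var_list=[keep])

ckpt_dir = tempfile.mkdtemp()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, os.path.join(ckpt_dir, 'partial.ckpt'))

# Inspect what actually ended up in the checkpoint.
saved_names = [name for name, _ in tf.train.list_variables(ckpt_dir)]
```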

