In which cases do we use the attribute trainable_variables over trainable_weights, and vice versa, on a tf.keras.Model in TF2?
Question
I was studying how to do transfer learning in TF 2, and I saw that in this tutorial from Tensorflow they use the attribute trainable_variables to reference the trainable variables of a model, but in this other tutorial from the keras documentation they use the attribute trainable_weights of a tf.keras.Model.
I checked both attributes with a simple model, and they give me the same result.
import tensorflow as tf
print(tf.__version__)
inputs = tf.keras.layers.Input(shape=[64, 64, 3])
x = tf.keras.layers.Conv2D(128, kernel_size=3, strides=2)(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU(alpha=0.2)(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
print("\nTrainable weights")
vars_model = [var.name for var in model.trainable_weights]
print(*vars_model, sep="\n")
print("\nTrainable variables")
vars_model = [var.name for var in model.trainable_variables]
print(*vars_model, sep="\n")
Output:
2.2.0
Trainable weights
conv2d/kernel:0
conv2d/bias:0
batch_normalization/gamma:0
batch_normalization/beta:0
Trainable variables
conv2d/kernel:0
conv2d/bias:0
batch_normalization/gamma:0
batch_normalization/beta:0
I checked this other issue and tried to follow the definition of both attributes: trainable_variables seems to be defined here, and trainable_weights seems to be defined here and here, since tf.keras.Model also inherits from network.Network. The former seems to be returning the trainable_weights variable. But I am not sure that this happens in "all" cases.
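One quick way to confirm that inheritance chain (a minimal sketch, assuming a TF 2.x install) is to check that tf.keras.Model is a subclass of the base tf.keras.layers.Layer class, which is where these properties live:

```python
import tensorflow as tf

# tf.keras.Model inherits (through the internal Network class) from
# tf.keras.layers.Layer, the base class that defines trainable_variables.
print(issubclass(tf.keras.Model, tf.keras.layers.Layer))  # True
```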
So, I am wondering: in which cases do we use trainable_variables over trainable_weights, and vice versa, and why?
Answer
They are both the same in Tensorflow version 2.2.0. If you go into the source code of the base layer class, tf.keras.layers.Layer (click on "View source on GitHub"), you can find the assignment below. This is the class from which all layers inherit.
@property
@doc_controls.do_not_generate_docs
def trainable_variables(self):
  return self.trainable_weights

@property
@doc_controls.do_not_generate_docs
def non_trainable_variables(self):
  return self.non_trainable_weights
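Because trainable_variables simply returns self.trainable_weights, the two properties yield the very same Variable objects, not just variables with equal names. A quick sketch (assuming a TF 2.x install, and mirroring the model from the question) checks element-wise identity:

```python
import tensorflow as tf

# A minimal model, mirroring the one from the question.
inputs = tf.keras.layers.Input(shape=[64, 64, 3])
x = tf.keras.layers.Conv2D(128, kernel_size=3, strides=2)(inputs)
x = tf.keras.layers.BatchNormalization()(x)
model = tf.keras.Model(inputs=inputs, outputs=x)

# Both properties return the identical Variable objects.
same = all(
    v is w
    for v, w in zip(model.trainable_variables, model.trainable_weights)
)
print(same)  # True
print(len(model.trainable_variables) == len(model.trainable_weights))  # True
```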
Hope this answers your question. Happy Learning.