In tensorflow, what is the difference between a constant and a non-trainable variable?
Question
According to this answer, the value of a tf.constant() is stored multiple times in memory. This provides a practical answer to whether to use a TensorFlow constant or a non-trainable variable when you have some big tensor that should not change value.

However, it is not clear to me why both exist, and why (and under which circumstances) a tf.constant would be replicated in memory.
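For concreteness, the two options being compared can be sketched as follows. This is a minimal illustration, not code from the question; the embedding matrix and names are hypothetical, and the snippet assumes a TensorFlow 2.x eager environment:

```python
import numpy as np
import tensorflow as tf

# A hypothetical embedding matrix (shape and name are illustrative).
embedding = np.random.rand(100, 16).astype(np.float32)

# Option 1: a constant. The value is baked into the graph definition
# itself, so it can end up stored both in the NumPy array and in the graph.
W_const = tf.constant(embedding, name="W_const")

# Option 2: a non-trainable variable. The value lives in the variable
# store, and trainable=False excludes it from gradient updates.
W_var = tf.Variable(embedding, trainable=False, name="W_var")
```

Both tensors hold identical values; the difference the question asks about is where and how that value is stored.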
Answer
If you do W = tf.constant(embedding, name="W"), then the value of the embedding is stored twice -- on the NumPy side in embedding and on the TensorFlow side in the W op. Note that constant values are stored in the Graph object, which is not optimized for large parallel data transfers (at least there were performance complaints before acac487a), whereas the Variable value store is optimized for them.
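A common way to apply this advice is the placeholder-initializer pattern: define the variable with a placeholder as its initial value, so the large array is fed in at initialization time rather than being serialized into the graph definition. The sketch below assumes TF1-style graph mode via tf.compat.v1; variable and placeholder names are illustrative:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A hypothetical embedding matrix (shape is illustrative).
embedding = np.random.rand(100, 16).astype(np.float32)

# The placeholder keeps the big array out of the serialized graph:
# only the placeholder node, not the data, is part of the GraphDef.
embedding_ph = tf.placeholder(tf.float32, shape=embedding.shape)
W = tf.Variable(embedding_ph, trainable=False, name="W")

with tf.Session() as sess:
    # Feed the array once, when the variable is initialized.
    sess.run(W.initializer, feed_dict={embedding_ph: embedding})
    result = sess.run(W)
```

With this pattern the NumPy array is still held once on the Python side, but it is no longer duplicated inside the Graph object.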