tensorflow: what's the difference between tf.nn.dropout and tf.layers.dropout
Question
I'm quite confused about whether to use tf.nn.dropout or tf.layers.dropout.
Many MNIST CNN examples seem to use tf.nn.dropout, with keep_prob as one of the parameters.
But how is it different from tf.layers.dropout? Is the "rate" parameter in tf.layers.dropout similar to keep_prob in tf.nn.dropout?
Or, generally speaking, does the difference between tf.nn.dropout and tf.layers.dropout apply to all other similar situations, such as the similar functions in tf.nn and tf.layers?
Answer
A quick glance through tensorflow/python/layers/core.py and tensorflow/python/ops/nn_ops.py reveals that tf.layers.dropout is a wrapper for tf.nn.dropout.
The only differences between the two functions are:
- tf.nn.dropout has the parameter keep_prob: "Probability that each element is kept", while tf.layers.dropout has the parameter rate: "The dropout rate". Thus, keep_prob = 1 - rate, as defined here.
- tf.layers.dropout has a training parameter: "Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched)."
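The relationship between the two parameterizations can be illustrated with a plain-Python sketch (not TensorFlow itself; the function names here are hypothetical stand-ins for the two APIs): keep_prob is the probability an element survives, rate is the fraction dropped, and the tf.layers-style wrapper simply forwards 1 - rate while adding a training switch.

```python
import random

def dropout_keep_prob(x, keep_prob, training=True, seed=None):
    # tf.nn.dropout-style: each element is kept with probability keep_prob,
    # and survivors are scaled by 1/keep_prob so the expected sum is unchanged.
    if not training:
        return list(x)
    rng = random.Random(seed)
    return [v / keep_prob if rng.random() < keep_prob else 0.0 for v in x]

def dropout_rate(x, rate, training=False, seed=None):
    # tf.layers.dropout-style: rate is the fraction dropped, so it forwards
    # keep_prob = 1 - rate; note training defaults to False (inference mode,
    # input returned untouched).
    return dropout_keep_prob(x, keep_prob=1.0 - rate, training=training, seed=seed)

x = [1.0, 2.0, 3.0, 4.0]
# With the same seed and keep_prob = 1 - rate, both produce identical outputs.
a = dropout_keep_prob(x, keep_prob=0.75, training=True, seed=0)
b = dropout_rate(x, rate=0.25, training=True, seed=0)
assert a == b
# In inference mode, the tf.layers-style version returns the input unchanged.
assert dropout_rate(x, rate=0.25, training=False) == x
```

This also shows why the training flag matters in practice: with tf.layers.dropout you can feed the flag as a placeholder and flip dropout off at evaluation time, whereas with tf.nn.dropout you would typically set keep_prob to 1.0 yourself during inference.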