tensorflow: what's the difference between tf.nn.dropout and tf.layers.dropout


Question

I'm quite confused about whether to use tf.nn.dropout or tf.layers.dropout.

Many MNIST CNN examples seem to use tf.nn.dropout, with keep_prob as one of the parameters.

But how is it different from tf.layers.dropout? Is the rate parameter in tf.layers.dropout similar to keep_prob in tf.nn.dropout?

Or, generally speaking, does the difference between tf.nn.dropout and tf.layers.dropout apply to all other similar situations, such as the similar functions in tf.nn and tf.layers?

Answer

A quick glance through tensorflow/python/layers/core.py and tensorflow/python/ops/nn_ops.py reveals that tf.layers.dropout is a wrapper for tf.nn.dropout.

The only differences between the two functions are:

  1. tf.nn.dropout has the parameter keep_prob: "Probability that each element is kept".
     tf.layers.dropout has the parameter rate: "The dropout rate".
     Thus, keep_prob = 1 - rate as defined here.
  2. tf.layers.dropout has a training parameter: "Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched)."
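Both differences can be illustrated with a minimal NumPy sketch of inverted dropout. This is only an illustration of the keep_prob-vs-rate and training-mode semantics, not TensorFlow's actual implementation; the function names here are made up for the example:

```python
import numpy as np

def dropout_keep_prob(x, keep_prob, training=True, seed=None):
    """Inverted dropout in the tf.nn.dropout style: keep each element
    with probability keep_prob, scaling survivors by 1/keep_prob so the
    expected value of the output matches the input."""
    if not training:  # inference mode: return the input untouched
        return x
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

def dropout_rate(x, rate, training=True, seed=None):
    """The same operation in the tf.layers.dropout style, parameterized
    by the drop rate: keep_prob = 1 - rate."""
    return dropout_keep_prob(x, 1.0 - rate, training=training, seed=seed)

x = np.ones((4, 4))
# With the same seed, keep_prob=0.8 and rate=0.2 drop the same elements.
a = dropout_keep_prob(x, keep_prob=0.8, seed=0)
b = dropout_rate(x, rate=0.2, seed=0)
assert np.allclose(a, b)
# With training=False the input comes back unchanged, mirroring the
# training parameter of tf.layers.dropout.
assert np.array_equal(dropout_rate(x, rate=0.2, training=False), x)
```

Surviving elements come out as 1/0.8 = 1.25 rather than 1.0; that rescaling is why dropout layers need no adjustment at inference time.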

