What is the right batch normalization function in Tensorflow?


Question


In tensorflow 1.4, I found two functions that do batch normalization and they look the same:

  1. tf.layers.batch_normalization
  2. tf.contrib.layers.batch_norm


Which function should I use? Which one is more stable?

Answer


Just to add to the list, there are several more ways to do batch norm in tensorflow:

  • tf.nn.batch_normalization is a low-level op. The caller is responsible for handling the mean and variance tensors themselves.
  • tf.nn.fused_batch_norm is another low-level op, similar to the previous one. The difference is that it's optimized for 4D input tensors, which is the usual case in convolutional neural networks. tf.nn.batch_normalization accepts tensors of any rank greater than 1.
  • tf.layers.batch_normalization is a high-level wrapper over the previous ops. The biggest difference is that it takes care of creating and managing the running mean and variance tensors, and calls a fast fused op when possible. Usually, this should be the default choice for you.
  • tf.contrib.layers.batch_norm is the early implementation of batch norm, before it graduated to the core API (i.e., tf.layers). Its use is not recommended because it may be dropped in future releases.
  • tf.nn.batch_norm_with_global_normalization is another deprecated op. Currently, it delegates the call to tf.nn.batch_normalization, but it is likely to be dropped in the future.
  • Finally, there's also the Keras layer keras.layers.BatchNormalization, which with the tensorflow backend invokes tf.nn.batch_normalization.
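Whichever wrapper you pick, the transformation all of these ops ultimately apply is the same: y = (x - mean) / sqrt(variance + eps) * scale + offset. Here is a minimal pure-Python sketch of that formula (the function name batch_norm_1d is hypothetical, for illustration only, and is not a TensorFlow API):

```python
import math

def batch_norm_1d(xs, scale=1.0, offset=0.0, eps=1e-3):
    """Normalize a batch of scalars to zero mean / unit variance,
    then apply the learned scale (gamma) and offset (beta) --
    the same per-feature transformation batch norm performs."""
    mean = sum(xs) / len(xs)
    variance = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(variance + eps) * scale + offset
            for x in xs]
```

With scale=1 and offset=0 the output batch has (approximately) zero mean and unit variance. The difference between the low-level ops and tf.layers.batch_normalization is that the latter also creates and updates moving averages of mean and variance, which are used in place of the batch statistics at inference time.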

