What is the difference between using training=True or not in tf.layers.batch_normalization(training=?)


Problem description

Recently, I ran an experiment comparing tf.layers.batch_normalization(input, training=True) with tf.layers.batch_normalization(input); in both cases the layer was used during the training phase.

But something strange happened. If I use tf.layers.batch_normalization(input, training=True), the tfevents file created by tf.summary is about 400 MB, but if I use tf.layers.batch_normalization(input), the file is only about 20 MB. I cannot understand the reason for this.
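For reference, a minimal sketch of the kind of comparison described above; the placeholder shape, tensor names, and summary setup are illustrative, not the asker's actual code:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 128], name="x")

# Run A: training=True, so the layer normalizes with mini-batch statistics
# and registers update ops for the moving mean/variance.
bn_out = tf.layers.batch_normalization(x, training=True)
# Run B: omit the argument; `training` defaults to False (inference mode).
# bn_out = tf.layers.batch_normalization(x)

tf.summary.histogram("bn_output", bn_out)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("./logs", sess.graph)
    summary = sess.run(merged, feed_dict={x: [[0.0] * 128]})
    writer.add_summary(summary, global_step=0)
    writer.close()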

Recommended answer

As explained here, batch normalization behaves differently during training versus test time:

During the training phase, you normalize layer activations according to mini-batch statistics, while at test time you normalize according to the estimated population statistics.

This behaviour is controlled through the training parameter, as explained in TensorFlow's documentation.
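In graph-mode TensorFlow 1.x the usual pattern is to feed that flag through a boolean placeholder and to run the moving-average update ops together with the training step. A minimal sketch, with layer sizes and names chosen only for illustration:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 128], name="x")
labels = tf.placeholder(tf.int64, [None], name="labels")
is_training = tf.placeholder(tf.bool, name="is_training")

# Pass the flag through `training` so mini-batch statistics are used while
# training and the moving averages are used at test time.
h = tf.layers.dense(x, 64)
h = tf.layers.batch_normalization(h, training=is_training)
h = tf.nn.relu(h)
logits = tf.layers.dense(h, 10)

loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# batch_normalization adds its moving-average update ops to UPDATE_OPS;
# they must run alongside the training step or the averages never update.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

# Training step:   sess.run(train_op, feed_dict={..., is_training: True})
# Evaluation step: feed is_training: False so population statistics are used.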

Thus, during testing much less information is stored. You shouldn't set it that way for training purposes, though.

Hope this helps!
