BatchNormalization in Keras


Question

How do I update moving mean and moving variance in keras BatchNormalization?

I found this in the TensorFlow documentation, but I don't know where to put train_op or how to use it with Keras models:

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)

None of the posts I found say what to do with train_op, or whether it can be used with model.compile.

Answer

You do not need to update the moving means and variances manually if you are using the BatchNormalization layer. Keras takes care of updating these parameters during training and of keeping them fixed during testing (i.e., when using the model.predict and model.evaluate functions; the same holds for model.fit_generator and friends).

Keras also keeps track of the learning phase, so different code paths run during training and during validation/testing.
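To make the two code paths concrete, here is a minimal NumPy sketch of what a batch-norm layer does internally (not Keras's actual implementation): in training mode it normalizes with the current batch statistics and updates the moving averages; in inference mode it normalizes with the stored moving averages and leaves them untouched. The momentum and eps defaults are assumed to mirror Keras's BatchNormalization defaults.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, moving_mean, moving_var,
                      training, momentum=0.99, eps=1e-3):
    """Simplified batch-norm forward pass for a 2-D (batch, features) input.

    Returns the normalized output plus the (possibly updated) moving
    statistics, mimicking the training/inference split Keras handles
    for you automatically.
    """
    if training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        # Exponential-moving-average update that Keras performs during fit
        moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
        moving_var = momentum * moving_var + (1.0 - momentum) * var
    else:
        # Inference: use the stored statistics, do not update them
        mean, var = moving_mean, moving_var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta, moving_mean, moving_var

# Training step: batch statistics are used, moving averages drift toward them
rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=(64, 3))
gamma, beta = np.ones(3), np.zeros(3)
mm, mv = np.zeros(3), np.ones(3)
y_train, mm, mv = batchnorm_forward(x, gamma, beta, mm, mv, training=True)

# Inference step: moving averages are used and stay fixed
y_test, mm_after, mv_after = batchnorm_forward(x, gamma, beta, mm, mv,
                                               training=False)
```

This is why no train_op wiring is needed in Keras: the moving-statistics update is attached to the training step for you, and the learning phase selects which branch runs.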

