BatchNorm momentum convention PyTorch


Problem description

Is the batchnorm momentum convention in PyTorch (default = 0.1) correct, given that other libraries such as TensorFlow seem to default to 0.9 or 0.99? Or are the two simply using different conventions?

Answer

The parametrization convention is different in PyTorch than in TensorFlow, so a momentum of 0.1 in PyTorch is equivalent to a decay of 0.9 in TensorFlow.

More precisely:

In TensorFlow:

running_mean = decay*running_mean + (1-decay)*new_value

In PyTorch:

running_mean = (1-decay)*running_mean + decay*new_value
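This update rule can be checked directly against torch.nn.BatchNorm1d. A minimal sketch, assuming the running mean starts at zero (which is how PyTorch initializes it):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

momentum = 0.1  # PyTorch's default
bn = nn.BatchNorm1d(3, momentum=momentum)
bn.train()  # running statistics are only updated in training mode

x = torch.randn(8, 3)
bn(x)  # one forward pass updates bn.running_mean

# Reproduce the update by hand: running_mean starts at zeros,
# new_value is the per-channel batch mean
expected = (1 - momentum) * torch.zeros(3) + momentum * x.mean(dim=0)
print(torch.allclose(bn.running_mean, expected, atol=1e-6))  # True
```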

This means that a value of decay in PyTorch is equivalent to a value of (1 - decay) in TensorFlow.
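A quick plain-Python sketch of the equivalence: feeding the same sequence of batch means through both update rules, with the PyTorch momentum set to 1 minus the TensorFlow decay, yields identical running means (the variable names and batch means here are illustrative, not library APIs):

```python
batch_means = [1.0, 2.0, 3.0, 4.0]  # hypothetical per-batch means

tf_decay = 0.9                # TensorFlow-style decay
pt_momentum = 1 - tf_decay    # = 0.1, PyTorch-style momentum

tf_running = 0.0
pt_running = 0.0
for m in batch_means:
    # TensorFlow convention
    tf_running = tf_decay * tf_running + (1 - tf_decay) * m
    # PyTorch convention
    pt_running = (1 - pt_momentum) * pt_running + pt_momentum * m

print(tf_running == pt_running)  # True
```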

