Caffe Iteration loss versus Train Net loss


Question

I'm using caffe to train a CNN with a Euclidean loss layer at the bottom, and my solver.prototxt file configured to display every 100 iterations. I see something like this,

Iteration 4400, loss = 0
I0805 11:10:16.976716 1936085760 solver.cpp:229]     Train net output #0: loss = 2.92436 (* 1 = 2.92436 loss)

I'm confused as to what the difference between the Iteration loss and Train net loss is. Usually the iteration loss is very small (around 0) and the Train net output loss is a bit larger. Can somebody please clarify?

Answer

Evan Shelhamer already gave his answer on https://groups.google.com/forum/#!topic/caffe-users/WEhQ92s9Vus.

As he pointed out, the net output #k result is the output of the net for that particular iteration/batch, while the Iteration T, loss = X output is smoothed across iterations according to the average_loss field.
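That smoothing behavior can be sketched in a few lines of Python. This is a minimal illustration of the averaging described above, not Caffe's actual C++ solver code: the displayed "Iteration T, loss" is the mean of the last average_loss per-batch losses, while "Train net output #0: loss" is the raw loss of the current batch alone.

```python
from collections import deque

def smoothed_loss_stream(batch_losses, average_loss=1):
    """Yield (raw, smoothed) pairs per iteration.

    raw      -- the current batch's loss ("Train net output #0: loss")
    smoothed -- mean over the last `average_loss` batches
                ("Iteration T, loss"), mimicking Caffe's solver display.
    """
    window = deque(maxlen=average_loss)  # sliding window of recent losses
    for raw in batch_losses:
        window.append(raw)
        yield raw, sum(window) / len(window)
```

With the default average_loss = 1 the two numbers coincide; with a larger window the smoothed value lags behind the raw batch loss, which is why the two printed losses usually differ.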
