Caffe - Average accuracy over N last iterations


Question

I'm training a neural network using Caffe. In the solver.prototxt file, I can set average_loss to print the loss averaged over the last N iterations. Is it possible to do so with other values as well?
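For context, average_loss is one of the solver settings; a minimal solver.prototxt sketch (the net file name and the other values here are illustrative, not taken from the question):

```protobuf
net: "train_val.prototxt"   # hypothetical net definition file
base_lr: 2e-12
display: 3                  # log every 3 iterations
average_loss: 3             # smooth the reported loss over the last 3 iterations
max_iter: 10
snapshot: 10
snapshot_prefix: "snapshot/train"
```

Note that average_loss affects only the reported loss, which is exactly the limitation the question is about.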

For example, I wrote a custom PythonLayer outputting accuracy, and I would like to display the average accuracy over the last N iterations as well.

Thanks.

EDIT: here is the log. The DEBUG lines show the accuracy computed on each image, and every 3 images (average_loss: 3 and display: 3) the accuracy is displayed along with the loss. We can see that only the last value is displayed; what I want is the average of the 3.

2018-04-24 10:38:06,383 [DEBUG]: Accuracy: 0 / 524288 = 0.000000
I0424 10:38:07.517436 99964 solver.cpp:251] Iteration 0, loss = 1.84883e+06
I0424 10:38:07.517503 99964 solver.cpp:267]     Train net output #0: accuracy = 0
I0424 10:38:07.517521 99964 solver.cpp:267]     Train net output #1: loss = 1.84883e+06 (* 1 = 1.84883e+06 loss)
I0424 10:38:07.517536 99964 sgd_solver.cpp:106] Iteration 0, lr = 2e-12
I0424 10:38:07.524904 99964 solver.cpp:287]     Time: 2.44301s/1iters
2018-04-24 10:38:08,653 [DEBUG]: Accuracy: 28569 / 524288 = 0.054491
2018-04-24 10:38:11,010 [DEBUG]: Accuracy: 22219 / 524288 = 0.042379
2018-04-24 10:38:13,326 [DEBUG]: Accuracy: 168424 / 524288 = 0.321243
I0424 10:38:14.533329 99964 solver.cpp:251] Iteration 3, loss = 1.84855e+06
I0424 10:38:14.533406 99964 solver.cpp:267]     Train net output #0: accuracy = 0.321243
I0424 10:38:14.533426 99964 solver.cpp:267]     Train net output #1: loss = 1.84833e+06 (* 1 = 1.84833e+06 loss)
I0424 10:38:14.533440 99964 sgd_solver.cpp:106] Iteration 3, lr = 2e-12
I0424 10:38:14.534195 99964 solver.cpp:287]     Time: 7.01088s/3iters
2018-04-24 10:38:15,665 [DEBUG]: Accuracy: 219089 / 524288 = 0.417879
2018-04-24 10:38:17,943 [DEBUG]: Accuracy: 202896 / 524288 = 0.386993
2018-04-24 10:38:20,210 [DEBUG]: Accuracy: 0 / 524288 = 0.000000
I0424 10:38:21.393121 99964 solver.cpp:251] Iteration 6, loss = 1.84769e+06
I0424 10:38:21.393190 99964 solver.cpp:267]     Train net output #0: accuracy = 0
I0424 10:38:21.393210 99964 solver.cpp:267]     Train net output #1: loss = 1.84816e+06 (* 1 = 1.84816e+06 loss)
I0424 10:38:21.393224 99964 sgd_solver.cpp:106] Iteration 6, lr = 2e-12
I0424 10:38:21.393940 99964 solver.cpp:287]     Time: 6.85962s/3iters
2018-04-24 10:38:22,529 [DEBUG]: Accuracy: 161180 / 524288 = 0.307426
2018-04-24 10:38:24,801 [DEBUG]: Accuracy: 178021 / 524288 = 0.339548
2018-04-24 10:38:27,090 [DEBUG]: Accuracy: 208571 / 524288 = 0.397818
I0424 10:38:28.297776 99964 solver.cpp:251] Iteration 9, loss = 1.84482e+06
I0424 10:38:28.297843 99964 solver.cpp:267]     Train net output #0: accuracy = 0.397818
I0424 10:38:28.297863 99964 solver.cpp:267]     Train net output #1: loss = 1.84361e+06 (* 1 = 1.84361e+06 loss)
I0424 10:38:28.297878 99964 sgd_solver.cpp:106] Iteration 9, lr = 2e-12
I0424 10:38:28.298607 99964 solver.cpp:287]     Time: 6.9049s/3iters
I0424 10:38:28.331749 99964 solver.cpp:506] Snapshotting to binary proto file snapshot/train_iter_10.caffemodel
I0424 10:38:36.171842 99964 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot/train_iter_10.solverstate
I0424 10:38:43.068686 99964 solver.cpp:362] Optimization Done.

Answer

Caffe averages only the net's global loss (the weighted sum of all loss layers) over average_loss iterations; for all other output blobs it reports the output of the last batch only.

Therefore, if you want your Python layer to report accuracy averaged over several iterations, I suggest you store a buffer as a member of your layer class and display this aggregated value.
Alternatively, you can implement a "moving average" on top of the accuracy calculation and output this value as a "top".

You can have a "moving average output layer" implemented in Python. This layer can take any number of "bottoms" and output the moving average of these bottoms.

Python code for the layer:

import caffe
class MovingAverageLayer(caffe.Layer):
  def setup(self, bottom, top):
    assert len(bottom) == len(top), "layer must have same number of inputs and outputs"
    # average over how many iterations? read from param_str
    self.buf_size = int(self.param_str)
    # allocate a buffer for each "bottom"
    # allocate a buffer for each "bottom" (use the `bottom` argument,
    # not `self.bottom`, which is not defined at setup time)
    self.buf = [[] for _ in bottom]

  def reshape(self, bottom, top):
    # make sure inputs and outputs have the same size
    for i, b in enumerate(bottom):
      top[i].reshape(*b.shape)

  def forward(self, bottom, top):
    # put into buffers
    for i, b in enumerate(bottom):
      self.buf[i].append(b.data.copy())
      if len(self.buf[i]) > self.buf_size:
        self.buf[i].pop(0)
      # compute average
      a = 0
      for elem in self.buf[i]:
        a += elem
      top[i].data[...] = a / len(self.buf[i])

  def backward(self, top, propagate_down, bottom):
    # this layer does not back prop
    pass
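The buffering logic in forward can be sanity-checked outside Caffe. Below is a minimal standalone sketch of the same idea (the class name MovingAverage and the sample values are made up for illustration):

```python
class MovingAverage:
    """Keep the last buf_size values and return their mean,
    mirroring the per-bottom buffer logic of MovingAverageLayer."""

    def __init__(self, buf_size):
        self.buf_size = buf_size
        self.buf = []

    def update(self, value):
        self.buf.append(value)
        if len(self.buf) > self.buf_size:
            self.buf.pop(0)  # drop the oldest value
        return sum(self.buf) / len(self.buf)


ma = MovingAverage(3)
print(ma.update(0))  # 0.0
print(ma.update(3))  # 1.5
print(ma.update(6))  # 3.0
print(ma.update(9))  # 6.0  (only the last 3 values are kept: 3, 6, 9)
```

This is the behavior the layer produces per blob; the real layer additionally copies blob data with `b.data.copy()` so the buffer is not aliased to Caffe's memory.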

How to use this layer in a prototxt:

layer {
  name: "moving_ave"
  type: "Python"
  bottom: "accuracy"
  top: "av_accuracy"
  python_param {
    layer: "MovingAverageLayer"
    module: "path.to.module"
    param_str: "30"  # buf size 
  }
}

See this tutorial for more information.

Original incorrect answer:
Caffe logs whatever the net outputs: loss, accuracy, or any other blob that appears as a "top" of a layer and is not used as a "bottom" in any other layer.
Therefore, if you want to see the accuracy computed by a "Python" layer, simply make sure no other layer uses this accuracy as an input.

