Caffe predicts same class regardless of image
Question
I modified the MNIST example and when I train it with my 3 image classes it returns an accuracy of 91%. However, when I modify the C++ example with a deploy prototxt file and labels file, and try to test it on some images it returns a prediction of the second class (1 circle) with a probability of 1.0 no matter what image I give it - even if it's images that were used in the training set. I've tried a dozen images and it consistently just predicts the one class.
To clarify things, in the C++ example I modified I did scale the image to be predicted just like the images were scaled in the training stage:
img.convertTo(img, CV_32FC1);  // convert to single-channel float
img = img * 0.00390625;        // scale by 1/256, as in the training stage
If that was the right thing to do, then it makes me wonder if I've done something wrong with the output layers that calculate probability in my deploy_arch.prototxt file.
Answer
I think you have forgotten to scale the input image during classification time, as can be seen in line 11 of the train_test.prototxt file. You should probably multiply by that factor somewhere in your C++ code, or alternatively use a Caffe layer to scale the input (look into ELTWISE or POWER layers for this).
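As a sketch of the POWER-layer alternative mentioned above: Caffe's Power layer computes (shift + scale * x)^power, so with power = 1 and shift = 0 it reduces to a plain multiply. The layer name and blob names below are assumptions, and the exact prototxt syntax varies between older and newer Caffe releases (this uses the newer `layer { ... }` form):

```
layer {
  name: "scale_data"        # hypothetical name
  type: "Power"
  bottom: "data"
  top: "data"               # in-place scaling of the input blob
  power_param {
    power: 1
    scale: 0.00390625       # 1/256, the same factor as in train_test.prototxt
    shift: 0
  }
}
```

Placing this directly after the input layer in deploy.prototxt would apply the same 1/256 scaling at deploy time without any changes to the C++ code.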
EDIT:
After a conversation in the comments, it turned out that the image mean was mistakenly being subtracted in the classification.cpp file whereas it was not being subtracted in the original training/testing pipeline.
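The fix, in other words, is that the classification-time preprocessing must match the training-time preprocessing exactly: scale by 1/256, and do not subtract a mean image the training pipeline never used. A minimal sketch of that preprocessing (the function name and the use of a plain vector instead of a cv::Mat are illustrative assumptions):

```cpp
#include <vector>

// Convert raw 8-bit pixel values to the float range used at training time.
// The factor 0.00390625 (= 1/256) matches the scale transform in
// train_test.prototxt. There is deliberately NO mean subtraction here,
// mirroring the original training/testing pipeline.
std::vector<float> scale_pixels(const std::vector<unsigned char>& raw) {
    std::vector<float> out;
    out.reserve(raw.size());
    for (unsigned char p : raw) {
        out.push_back(p * 0.00390625f);
    }
    return out;
}
```

Any extra step at classification time (such as the mean subtraction classification.cpp performed by default) shifts the inputs away from the distribution the network was trained on, which is exactly the kind of mismatch that makes the softmax collapse onto a single class.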