Setting input layer in CAFFE with C++
Problem description
I'm writing C++ code using CAFFE to predict a single (for now) image. The image has already been preprocessed and is in .png format. I have created a Net object and read in the trained model. Now, I need to use the .png image as an input layer and call net.Forward() - but can someone help me figure out how to set the input layer?
I found a few examples on the web, but none of them work, and almost all of them use deprecated functionality. According to: Berkeley's Net API, using "ForwardPrefilled" is deprecated, and using "Forward(vector, float*)" is deprecated. API indicates that one should "set input blobs, then use Forward() instead". That makes sense, but the "set input blobs" part is not expanded on, and I can't find a good C++ example on how to do that.
I'm not sure if using a caffe::Datum is the right way to go or not, but I've been playing with this:
float lossVal = 0.0;
caffe::Datum datum;
caffe::ReadImageToDatum("myImg.png", 1, imgDims[0], imgDims[1], &datum);
caffe::Blob<float>* imgBlob = new caffe::Blob<float>(1, datum.channels(), datum.height(), datum.width());
// How to get the image data into the blob, and the blob into the net as input layer???
const vector<caffe::Blob<float>*>& result = caffeNet.Forward(&lossVal);
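(For the data-copy step the comment asks about: ReadImageToDatum already stores the pixels in Caffe's planar channel-major order, but if you load the image yourself from an interleaved RGB buffer, it has to be reordered from HWC to the planar CHW layout a Blob<float> expects. A self-contained sketch of that reindexing, with plain vectors standing in for the image and blob memory and a function name of my own choosing:)

```cpp
#include <cstddef>
#include <vector>

// Reorder an interleaved HWC uint8 image into a planar CHW float buffer,
// the memory layout Caffe's Blob<float> uses for a single image (N = 1).
std::vector<float> hwc_to_chw(const std::vector<unsigned char>& hwc,
                              int height, int width, int channels) {
    std::vector<float> chw(static_cast<std::size_t>(channels) * height * width);
    for (int c = 0; c < channels; ++c)
        for (int h = 0; h < height; ++h)
            for (int w = 0; w < width; ++w)
                // Destination offset for (c, h, w); source pixel is interleaved.
                chw[(c * height + h) * width + w] =
                    static_cast<float>(hwc[(h * width + w) * channels + c]);
    return chw;
}
```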
Again, I'd like to follow the API's direction of setting the input blobs and then using the (non-deprecated) caffeNet.Forward(&lossVal) to get the result as opposed to making use of the deprecated stuff.
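(For reference, the non-deprecated path the API hints at looks roughly like this: fetch the net's input blob, copy your preprocessed CHW float pixels into its mutable_cpu_data(), then call Forward() with no bottom vector. This is an untested sketch against the Caffe API of the time, assuming a deploy prototxt with a single declared input; `pixels` is a hypothetical pointer to your preprocessed data:)

```cpp
// Sketch only: assumes caffeNet exposes one input blob (e.g. 1x3x100x100).
caffe::Blob<float>* input = caffeNet.input_blobs()[0];
float* input_data = input->mutable_cpu_data();
std::copy(pixels, pixels + input->count(), input_data);  // CHW floats

caffeNet.Forward();                                      // non-deprecated form
const caffe::Blob<float>* output = caffeNet.output_blobs()[0];
const float* scores = output->cpu_data();                // class scores
```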
EDIT:
Based on an answer below, I updated to include this:
caffe::MemoryDataLayer<unsigned char> *memory_data_layer = (caffe::MemoryDataLayer<unsigned char> *)caffeNet.layer_by_name("input").get();
vector< caffe::Datum > datumVec;
datumVec.push_back(datum);
memory_data_layer->AddDatumVector(datumVec);
But now the call to AddDatumVector is segfaulting. I wonder if this is related to my prototxt format? Here's the top of my prototxt:
name: "deploy"
input: "data"
input_shape {
  dim: 1
  dim: 3
  dim: 100
  dim: 100
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
I base this part of the question on this discussion about a "source" field being important in the prototxt...
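(The segfault is consistent with that: the prototxt above declares the input with the old `input` / `input_shape` fields rather than as a layer, so `layer_by_name("input")` does not return a MemoryDataLayer and the C-style cast produces a garbage pointer. The cast should also be to `MemoryDataLayer<float>`, matching the net's Dtype, not `<unsigned char>`. To keep the AddDatumVector route, the deploy prototxt would need an actual MemoryData layer, roughly as below; field names are from the standard caffe.proto, and the batch size and dimensions are guesses for this net:)

```
layer {
  name: "input"
  type: "MemoryData"
  top: "data"
  top: "label"
  memory_data_param {
    batch_size: 1
    channels: 3
    height: 100
    width: 100
  }
}
```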
Answer
Here is an excerpt from my code (located here) where I use Caffe from C++. I hope this helps.
Net<float> caffe_test_net("models/sudoku/deploy.prototxt", caffe::TEST);
caffe_test_net.CopyTrainedLayersFrom("models/sudoku/sudoku_iter_10000.caffemodel");

// Get datum
Datum datum;
if (!ReadImageToDatum("examples/sudoku/cell.jpg", 1, 28, 28, false, &datum)) {
  LOG(ERROR) << "Error during file reading";
}

// Get the blob
Blob<float>* blob = new Blob<float>(1, datum.channels(), datum.height(), datum.width());

// Get the blobproto
BlobProto blob_proto;
blob_proto.set_num(1);
blob_proto.set_channels(datum.channels());
blob_proto.set_height(datum.height());
blob_proto.set_width(datum.width());
int size_in_datum = std::max<int>(datum.data().size(), datum.float_data_size());
for (int ii = 0; ii < size_in_datum; ++ii) {
  blob_proto.add_data(0.);
}
const string& data = datum.data();
if (data.size() != 0) {
  // Copy the datum's uint8 pixel bytes into the proto as floats.
  for (int ii = 0; ii < size_in_datum; ++ii) {
    blob_proto.set_data(ii, blob_proto.data(ii) + (uint8_t)data[ii]);
  }
}

// Set data into blob
blob->FromProto(blob_proto);

// Fill the vector
vector<Blob<float>*> bottom;
bottom.push_back(blob);
float type = 0.0;

const vector<Blob<float>*>& result = caffe_test_net.Forward(bottom, &type);
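(Once Forward() returns, `result[0]->cpu_data()` points at the output scores, and the predicted class is just the argmax over them. A self-contained illustration of that last step, with a plain vector standing in for the blob's data:)

```cpp
#include <algorithm>
#include <vector>

// Index of the largest score, i.e. the predicted class label.
int argmax(const std::vector<float>& scores) {
    return static_cast<int>(
        std::max_element(scores.begin(), scores.end()) - scores.begin());
}
```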