How to create caffe.deploy from train.prototxt


Problem description


This is my train.prototxt. And this is my deploy.prototxt.

When I want to load my deploy file I get this error:

File "./python/caffe/classifier.py", line 29, in __init__  
in_ = self.inputs[0]  
IndexError: list index out of range  

So, I removed the data layer:

F1117 23:16:09.485153 21910 insert_splits.cpp:35] Unknown bottom blob 'data' (layer 'conv1', bottom index 0)
*** Check failure stack trace: ***

Then, I removed bottom: "data" from the conv1 layer.

After it, I got this error:

F1117 23:17:15.363919 21935 insert_splits.cpp:35] Unknown bottom blob 'label' (layer 'loss', bottom index 1)
*** Check failure stack trace: ***

I removed bottom: "label" from the loss layer, and I got this error:

I1117 23:19:11.171021 21962 layer_factory.hpp:76] Creating layer conv1
I1117 23:19:11.171036 21962 net.cpp:110] Creating Layer conv1
I1117 23:19:11.171041 21962 net.cpp:433] conv1 -> conv1
F1117 23:19:11.171061 21962 layer.hpp:379] Check failed: MinBottomBlobs() <= bottom.size() (1 vs. 0) Convolution Layer takes at least 1 bottom blob(s) as input.
*** Check failure stack trace: ***

What should I do to fix it and create my deploy file?

Solution

There are two main differences between a "train" prototxt and a "deploy" one:

1. Inputs: While training data is fixed to a pre-processed dataset (lmdb/HDF5 etc.), a deployed net must process other inputs in a more "random" fashion.
Therefore, the first change is to remove the input layers (the layers that push "data" and "labels" during the TRAIN and TEST phases). To replace the input layers, add the following declaration:

input: "data"
input_shape: { dim:1 dim:3 dim:224 dim:224 }

This declaration does not provide the actual data for the net, but it tells the net what shape to expect, allowing caffe to pre-allocate necessary resources.
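Recent Caffe versions also accept the same declaration written as an explicit "Input" layer; a minimal sketch, assuming the same 1x3x224x224 input shape as above (adjust the dimensions to your net):

```prototxt
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
```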

2. Loss: the topmost layer in a training prototxt defines the loss function for training. This usually involves the ground-truth labels. When deploying the net, you no longer have access to these labels. Thus, loss layers should be converted to "prediction" outputs. For example, a "SoftmaxWithLoss" layer should be converted to a simple "Softmax" layer that outputs class probabilities instead of the log-likelihood loss. Some other loss layers already have predictions as inputs, so it is sufficient simply to remove them.
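As a concrete sketch of the loss conversion (the layer and blob names "loss", "fc8" and "prob" here are hypothetical; substitute your own), a train-time layer such as:

```prototxt
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "loss"
}
```

would become, in deploy.prototxt:

```prototxt
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}
```

Note that the "label" bottom is gone and the net now outputs a probability vector instead of a loss value.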

Update: see this tutorial for more information.
