Changing the input data layer during training in Caffe


Problem description



Is it possible to change the input source of ImageData layer or a MemoryData layer on the fly?

I am trying to shuffle the data every epoch but I have both image and some other non-image features that I want to concatenate at a later stage in the network. I could not find a reliable way to shuffle both the image and my other data in a way that preserves the alignment of the two.

So, I am thinking of re-generating the imagelist.txt as well as the non-image data (in memory) every epoch, attaching the new file to the ImageData layer, and initializing the MemoryData layer with the new data.

How can I make sure that I re-initialize the network with the new text file without restarting the training process? (I want the network to continue training at the same stage, with the same momentum etc., only start reading the image files from the new file instead of the one that was originally compiled.)

layer {
  name: "imgdata"
  type: "ImageData"
  top: "imgdata"
  top: "dlabel"
  transform_param {
    # Transform param here
  }
  image_data_param {
    source: "path to imagelist.txt" ## This file changes after n iterations
    batch_size: XX
    new_height: XXX
    new_width: XXX
  }
}
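The per-epoch regeneration described above can be sketched as follows. This is a hypothetical example (the paths, labels, and features are placeholders, not from the original post); the key point is that applying one shared permutation to every data source keeps the image list and the in-memory features aligned:

```python
import random

# Hypothetical stand-ins for the real training data; a single
# permutation index shuffles every source the same way.
image_paths = ["img_%03d.jpg" % i for i in range(5)]
labels = [i % 2 for i in range(5)]
features = [[i * 0.1, i * 0.2] for i in range(5)]  # non-image data kept in memory

perm = list(range(len(image_paths)))
random.shuffle(perm)

# Rewrite the list in ImageData format, one "<path> <label>" per line,
# and reorder the in-memory features with the same permutation.
lines = ["%s %d" % (image_paths[i], labels[i]) for i in perm]
shuffled_features = [features[i] for i in perm]

# Overwriting the file the ImageData layer points at each epoch:
# with open("imagelist.txt", "w") as f:
#     f.write("\n".join(lines) + "\n")
```

Because `lines[k]` and `shuffled_features[k]` were produced from the same index `perm[k]`, row k of the list file always corresponds to row k of the feature array.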

Similarly, I want to be able to copy the re-shuffled data into the MemoryData layer. Can I call Net.set_input_arrays in the middle of training?

layers {
  name: "data"
  type: MEMORY_DATA
  top: "data"
  top: "label"
  memory_data_param {
    batch_size: XX
    channels: X
    height: XXX
    width: XXX
  }
}
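A minimal sketch of preparing arrays for such a call, assuming standard pycaffe usage (the shapes below are placeholders for the XX/XXX values in the prototxt). `set_input_arrays` expects 4-D contiguous float32 arrays in N x C x H x W order, with N a multiple of the layer's batch size; labels are commonly passed with shape (N, 1, 1, 1):

```python
import numpy as np

# Placeholder dimensions standing in for the prototxt's XX/XXX values.
batch_size, channels, height, width = 4, 3, 32, 32
data = np.ascontiguousarray(
    np.random.rand(batch_size, channels, height, width).astype(np.float32))
labels = np.zeros((batch_size, 1, 1, 1), dtype=np.float32)

# Assumed pycaffe usage (requires a net containing a MemoryData layer):
# solver.net.set_input_arrays(data, labels)
# solver.step(1)  # continues training on the new arrays, keeping solver state
```

Calling this between `solver.step` calls swaps the input without touching the solver state, which matches the "continue at the same stage, momentum etc." requirement.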

Solution

Your problem could be solved with the help of Python layers, as suggested in the comments. An example of using a Python layer can be found within Caffe itself.

Within the Python script you can write the code to shuffle both data sources while preserving their alignment.
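The idea behind such a layer can be sketched as below. The class and method names are hypothetical (a real implementation would subclass `caffe.Layer` and fill the top blobs in `forward()`); the point is that one shared index list, reshuffled once per epoch, keeps images, non-image features, and labels aligned:

```python
import random

# Hypothetical data source for a Python layer; a real layer would
# subclass caffe.Layer and copy these values into its top blobs.
class AlignedShuffleSource(object):
    def __init__(self, image_paths, features, labels):
        self.image_paths, self.features, self.labels = image_paths, features, labels
        self.order = list(range(len(image_paths)))
        self.cursor = 0

    def next_example(self):
        if self.cursor == 0:          # start of a new epoch: reshuffle
            random.shuffle(self.order)
        i = self.order[self.cursor]   # one index selects from all sources
        self.cursor = (self.cursor + 1) % len(self.order)
        return self.image_paths[i], self.features[i], self.labels[i]
```

Each call returns a triple drawn with the same index, so the pairing between an image and its non-image features can never drift, no matter how often the order is reshuffled.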
