How to train a mix of image and data in a CNN using ImageAugmentation in TFlearn


Question

I would like to train a convolutional neural network in TFlearn/TensorFlow using a mix of images (pixel info) and other data. Because I have a small number of images, I need to use image augmentation to increase the number of image samples passed to the network. But that means I can only feed image data as input data, and have to add the non-image data at a later stage, presumably before the fully connected layer. I can't work out how to do this: it seems I can only tell the network what data to use when I call model.fit({'input': ...}), and I can't pass the concatenation of both types of data there because input_data feeds directly into the image augmentation. Is there any concatenation I can do mid-stage to add the extra data, or any other alternative that lets me use ImageAugmentation together with the non-image data I need to train the network? My code with some comments is below. Many thanks.

import numpy as np
import tensorflow as tf
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from tflearn.data_augmentation import ImageAugmentation

#px_train:pixel data, data_train: additional data 
px_train, data_train, px_cv, data_cv, labels_train, labels_cv = prepare_data(path, filename)

img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle = 89.)
img_aug.add_random_blur(sigma_max=3.)
img_aug.add_random_flip_updown()
img_aug.add_random_90degrees_rotation(rotations = [0, 1, 2, 3])

#I can only pass image data here to apply data_augmentation 
convnet = input_data(shape = [None, 96, 96, 1], name = 'input', data_augmentation = img_aug)

convnet = conv_2d(convnet, 32, 2, activation = 'relu')
convnet = max_pool_2d(convnet, 2)                                   

convnet = conv_2d(convnet, 64, 2, activation = 'relu')
convnet = max_pool_2d(convnet, 2)                                   

convnet = tf.reshape(convnet, [-1, 24*24*64])    
#convnet = tf.concat((convnet, conv_feat), 1)
#If I concatenated data like above, where could I tell Tensorflow to assign the variable conv_feat to my 'data_train' values?

convnet = fully_connected(convnet, 1024, activation = 'relu')
convnet = dropout(convnet, 0.8)

convnet = fully_connected(convnet, 99, activation = 'softmax')
convnet = regression(convnet, optimizer = 'adam', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'labels')

model = tflearn.DNN(convnet)

#I can't add additional 'input' labels here to pass my 'data_train'. TF gives error.
model.fit({'input': np.array(px_train).reshape(-1, 96, 96, 1)}, {'labels': labels_train}, n_epoch = 50, validation_set = ({'input': np.array(px_cv).reshape(-1, 96, 96, 1)}, {'labels': labels_cv}), snapshot_step = 500, show_metric = True, run_id = 'Test')


Answer

If you look at the documentation for the model.fit method (http://tflearn.org/models/dnn/), you'll see that to give multiple inputs to model.fit you just need to pass them as a list, i.e. model.fit([X1, X2], Y). This way X1 is passed to the first input_data layer you have and X2 is passed to the second input_data layer.

If you are looking to concatenate different layers, you can take a look at the merge layer in TFlearn: http://tflearn.org/layers/merge_ops/

I think the following code should run, although you may want to merge your layers in a different way than I am doing it.

import numpy as np
import tensorflow as tf
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from tflearn.layers.merge_ops import merge
from tflearn.data_augmentation import ImageAugmentation

# px_train, data_train, labels_train, etc. come from your prepare_data(path, filename)

img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle = 89.)
img_aug.add_random_blur(sigma_max=3.)
img_aug.add_random_flip_updown()
img_aug.add_random_90degrees_rotation(rotations = [0, 1, 2, 3])

convnet = input_data(shape = [None, 96, 96, 1], data_augmentation = img_aug)
convfeat = input_data(shape = [None, 120])

convnet = conv_2d(convnet, 32, 2, activation = 'relu')
convnet = max_pool_2d(convnet, 2)                                   

convnet = conv_2d(convnet, 64, 2, activation = 'relu')
convnet = max_pool_2d(convnet, 2)                                   

# To merge the two branches they need to have the same rank: fully_connected
# flattens the conv output to 2-D (batch, features) and projects it to 120 units
convnet = fully_connected(convnet, 120)
convnet = merge([convnet, convfeat], 'concat')

convnet = fully_connected(convnet, 1024, activation = 'relu')
convnet = dropout(convnet, 0.8)

convnet = fully_connected(convnet, 99, activation = 'softmax')
convnet = regression(convnet, optimizer = 'adam', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'labels')

model = tflearn.DNN(convnet)

# Give multiple inputs as a list
model.fit([np.array(px_train).reshape(-1, 96, 96, 1), np.array(data_train).reshape(-1, 120)], 
           labels_train, 
           n_epoch = 50, 
           validation_set = ([np.array(px_cv).reshape(-1, 96, 96, 1), np.array(data_cv).reshape(-1, 120)], labels_cv), 
           snapshot_step = 500, 
           show_metric = True, 
           run_id = 'Test')
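
Note that prediction will also need both inputs as a list in the same order. A minimal sketch, assuming TFlearn's DNN.predict accepts the same list form as model.fit (px_cv and data_cv come from the asker's prepare_data call):

# Hedged sketch, not part of the original answer: predicting with the two-input model.
# The inputs are passed in the same list order as in model.fit.
new_px = np.array(px_cv).reshape(-1, 96, 96, 1)   # image pixels
new_data = np.array(data_cv).reshape(-1, 120)     # non-image features
predictions = model.predict([new_px, new_data])   # one row of 99 class probabilities per sample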
