Add low layers in a Tensorflow model
Question
Trying to develop a transfer learning algorithm, I use some trained neural networks and add layers. I am using TensorFlow and Python.
It seems quite common to use existing graphs in TensorFlow: you import the graph, for example using MetaGraphs, then you set new high layers by adding nodes. For example, I found this code:
vgg_saver = tf.train.import_meta_graph(dir + '/vgg/results/vgg-16.meta')
# Access the graph
vgg_graph = tf.get_default_graph()
# Retrieve VGG inputs
self.x_plh = vgg_graph.get_tensor_by_name('input:0')
# Choose some node
output_conv = vgg_graph.get_tensor_by_name('conv1_2:0')
# Build further operations
output_conv_shape = output_conv.get_shape().as_list()
W1 = tf.get_variable('W1', shape=[1, 1, output_conv_shape[3], 32],
                     initializer=tf.random_normal_initializer(stddev=1e-1))
b1 = tf.get_variable('b1', shape=[32], initializer=tf.constant_initializer(0.1))
z1 = tf.nn.conv2d(output_conv, W1, strides=[1, 1, 1, 1], padding='SAME') + b1
a = tf.nn.relu(z1)
Then in the training, you would use your layers plus all those below. You can also freeze some layers, import trained variables during the session, etc.
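Freezing layers is usually done by handing the optimizer an explicit list of the variables it may update. As a minimal, TensorFlow-free sketch of that selection step (the variable names and the `vgg/` / `new_head/` scope prefixes below are hypothetical, chosen only to mirror a pretrained base plus a new head):

```python
# Hypothetical variable names: the pretrained base lives under "vgg/",
# the newly added layers under "new_head/".
all_variables = ["vgg/conv1_2/W", "vgg/conv1_2/b", "new_head/W1", "new_head/b1"]

def trainable_subset(variables, frozen_prefix):
    """Keep only variables NOT under the frozen scope prefix."""
    return [v for v in variables if not v.startswith(frozen_prefix)]

# Only the new head's variables would be passed to the optimizer's var_list.
train_vars = trainable_subset(all_variables, "vgg/")
print(train_vars)  # ['new_head/W1', 'new_head/b1']
```

In real TensorFlow code the same filtering is typically done over the collection of trainable variables, and the result is passed as the optimizer's `var_list`, so gradients are simply never computed for the frozen base.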
However, in my approach I need to add new low layers between the input and the first layer, and use my layers plus the ones above. Therefore I can't just add nodes at the bottom of the graph: I need to insert nodes right after the input.
Until now I have found no convenient way to do that with TensorFlow. Do you have any idea? Or is it just impossible?
Thanks.
Answer
You can't insert layers between existing layers of a graph, but you can import a graph with some rewiring along the way. As Pietro Tortella pointed out, the approach in Tensorflow: How to replace a node in a calculation graph? should work. Here is an example:
import tensorflow as tf

with tf.Graph().as_default() as g1:
    input1 = tf.placeholder(dtype=tf.float32, name="input_1")
    l1 = tf.multiply(input1, tf.constant(2.0), name="mult_1")
    l2 = tf.multiply(l1, tf.constant(3.0), name="mult_2")

g1_def = g1.as_graph_def()

with tf.Graph().as_default() as new_g:
    new_input = tf.placeholder(dtype=tf.float32, name="new_input")
    op_to_insert = tf.add(new_input, tf.constant(4.0), name="inserted_op")
    mult_2, = tf.import_graph_def(g1_def, input_map={"input_1": op_to_insert},
                                  return_elements=["mult_2"])
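To make the rewiring concrete without running TensorFlow, here is a plain-Python analogy of what `input_map` does: the imported graph's `input_1` placeholder is bypassed, and every consumer of it is rerouted to the replacement op's output. The node names mirror the TensorFlow example above; `run_graph` is a hypothetical stand-in for graph execution, not a TensorFlow API.

```python
# Toy analogy of input_map rewiring. The "graph" input_1 -> mult_1 (*2)
# -> mult_2 (*3) matches the example above.
def run_graph(x, input_map=None):
    # If a replacement is supplied, input_1 is never read from x:
    # its consumers are fed the replacement value instead.
    input_1 = x if input_map is None else input_map["input_1"]
    mult_1 = input_1 * 2.0
    mult_2 = mult_1 * 3.0
    return mult_2

inserted_op = 1.0 + 4.0  # new_input = 1.0 flowing through the inserted "+4" op
print(run_graph(None, input_map={"input_1": inserted_op}))  # 30.0
```

The key point is that the original `input_1` node still exists in the imported graph definition, but nothing downstream depends on it any more; the inserted op has taken over its outgoing edges.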
The original graph is simply input_1 -> mult_1 -> mult_2; in the imported graph, inserted_op is wired in where input_1 used to be, so it feeds directly into mult_1.
If you want to use tf.train.import_meta_graph, you can still pass in the input_map={"input_1": op_to_insert} kwarg. It will get passed down to import_graph_def.