How to show hidden layer outputs in Tensorflow
Question
I'm seeing differences in the outputs when comparing a model with its stored protobuf version (via this conversion script). For debugging I'm comparing both layer by layer. For the weights, and for the actual layer output during a test sequence, I receive identical values, so I'm not sure how to access the hidden layers.
Here is how I load the layers:
input = graph.get_tensor_by_name("lstm_1_input_1:0")
layer1 = graph.get_tensor_by_name("lstm_1_1/kernel:0")
layer2 = graph.get_tensor_by_name("lstm_1_1/recurrent_kernel:0")
layer3 = graph.get_tensor_by_name("time_distributed_1_1/kernel:0")
output = graph.get_tensor_by_name("activation_1_1/div:0")
And this is how I intended to show the respective elements.
Showing the weights:
with tf.Session(graph=graph) as sess:
    print(sess.run(layer1))
    print(sess.run(layer2))
    print(sess.run(layer3))
Showing the outputs:
with tf.Session(graph=graph) as sess:
    y_out, l1_out, l2_out, l3_out = sess.run([output, layer1, layer2, layer3], feed_dict={input: X_test})
With this code, sess.run(layer1) == sess.run(layer1, feed_dict={input: X_test}) holds, which shouldn't be the case.
Can anyone help me?
Answer
When you run sess.run(layer1), you're telling tensorflow to compute the value of the layer1 tensor, which is ...
layer1 = graph.get_tensor_by_name("lstm_1_1/kernel:0")
... according to your definition. Note that the LSTM kernel is the weights variable. It does not depend on the input, which is why you get the same result with sess.run(layer1, feed_dict={input: X_test}). It's not that tensorflow computes the output whenever the input is provided -- it computes the specified tensor(s), in this case layer1.
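This distinction can be demonstrated with a small standalone graph (a minimal sketch, not the asker's model; the variable w stands in for a kernel like lstm_1_1/kernel):

```python
import tensorflow.compat.v1 as tf  # in TF 1.x, plain `import tensorflow as tf`
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=(None, 3), name="x")
w = tf.Variable(tf.ones((3, 2)), name="w")   # a weights variable, like lstm_1_1/kernel
y = tf.matmul(x, w, name="y")                # depends on both x and w

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fetching the variable: the feed is irrelevant, both calls return the same array.
    w_plain = sess.run(w)
    w_fed = sess.run(w, feed_dict={x: [[1., 2., 3.]]})
    # Fetching y: now the feed is required, because y depends on x.
    y_fed = sess.run(y, feed_dict={x: [[1., 2., 3.]]})
    print(w_plain)
    print(y_fed)  # [[6. 6.]]
```

Fetching w with or without the feed returns the same array, exactly the behavior the asker observed with layer1.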
When does input matter then? When there is a dependency on it. For example:
- sess.run(output). It simply won't work without an input, or without any tensor that allows the input to be computed.
- An optimization op, such as tf.train.AdamOptimizer(...).minimize(loss). Running this op will change layer1, but it also needs the input to do so.
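So to see the actual hidden-layer activations, fetch a tensor that is computed from the input rather than a kernel variable. The exact tensor names depend on the graph; one way to find them is to list every op. A hedged sketch, using a toy graph in place of the asker's loaded protobuf:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Stand-in for the graph loaded from the protobuf.
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, (None, 3), name="x")
    w = tf.Variable(tf.ones((3, 2)), name="w")
    y = tf.matmul(x, w, name="y")

# Print every op name; a layer's output tensor is then op.name + ":0".
names = [op.name for op in graph.get_operations()]
for name in names:
    print(name)

# Fetch the op's output instead of the weights variable:
hidden = graph.get_tensor_by_name("y:0")
```

In the asker's graph, look under the lstm_1_1 scope for the op that produces the layer's output (not .../kernel or .../recurrent_kernel) and run that tensor with feed_dict={input: X_test}.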