Preserving tensor values between session runs

Question

Consider the following example:
import tensorflow as tf
import math
import numpy as np

INPUTS = 10
HIDDEN_1 = 20
BATCH_SIZE = 3

def create_graph(inputs):
    with tf.name_scope('h1'):
        weights = tf.Variable(
            tf.truncated_normal([INPUTS, HIDDEN_1],
                                stddev=1.0 / math.sqrt(float(INPUTS))),
            name='weights')
        biases = tf.Variable(tf.zeros([HIDDEN_1]), name='biases')
        state = tf.Variable(tf.zeros([HIDDEN_1]), name='inner_state')
        state = tf.Print(state, [state], message=" this is state before: ")
        state = 0.9 * state + 0.1 * (tf.matmul(inputs, weights) + biases)
        state = tf.Print(state, [state], message=" this is state after: ")
        output = tf.nn.relu(state)
    return output

def data_iter():
    while True:
        idxs = np.random.rand(BATCH_SIZE, INPUTS)
        yield idxs

with tf.Graph().as_default():
    inputs = tf.placeholder(tf.float32, shape=(BATCH_SIZE, INPUTS))
    output = create_graph(inputs)
    sess = tf.Session()
    # Run the Op to initialize the variables.
    init = tf.initialize_all_variables()
    sess.run(init)
    iter_ = data_iter()
    for i in xrange(0, 2):
        print("iteration: ", i)
        input_data = iter_.next()
        out = sess.run(output, feed_dict={inputs: input_data})
I was hoping the tensor state would preserve its intermediate value and change slowly on each iteration. However, what I am seeing is that on each sess.run invocation, the state starts again from zero values:
('iteration: ', 0)
I tensorflow/core/kernels/logging_ops.cc:79] this is state before: [0 0 0...]
I tensorflow/core/kernels/logging_ops.cc:79] this is state after: [0.007762237 0.044753391 0.049343754...]
('iteration: ', 1)
I tensorflow/core/kernels/logging_ops.cc:79] this is state before: [0 0 0...]
I tensorflow/core/kernels/logging_ops.cc:79] this is state after: [0.040079735 0.074709542 0.078258425...]
I would appreciate any clarification on how to address this behavior.
EDIT

After commenting out the tf.Print lines and replacing the state update with

state = state.assign(0.9*state + 0.1*( tf.matmul(inputs, weights) + biases ))

I get these errors:
Traceback (most recent call last):
File "cycles_in_graphs.py", line 33, in <module>
output = create_graph(inputs)
File "cycles_in_graphs.py", line 21, in create_graph
state = state.assign(0.9*state + 0.1*( tf.matmul(inputs, weights) + biases ))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 453, in assign
return state_ops.assign(self._variable, value, use_locking=use_locking)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 40, in assign
use_locking=use_locking, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2156, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1612, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/state_ops.py", line 197, in _AssignShape
return [op.inputs[0].get_shape().merge_with(op.inputs[1].get_shape())]
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 554, in merge_with
(self, other))
ValueError: Shapes (20,) and (3, 20) are not compatible
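The last line of the traceback pinpoints the problem: state was created as tf.zeros([HIDDEN_1]), i.e. shape (20,), while the value handed to assign comes from tf.matmul(inputs, weights) and has the batched shape (BATCH_SIZE, HIDDEN_1) = (3, 20), and assign requires the two shapes to match. The mismatch can be reproduced in plain NumPy (the array names below are illustrative stand-ins, not part of the original code):

```python
import numpy as np

INPUTS, HIDDEN_1, BATCH_SIZE = 10, 20, 3

state = np.zeros(HIDDEN_1)              # like tf.zeros([HIDDEN_1]): shape (20,)
inputs = np.ones((BATCH_SIZE, INPUTS))  # the placeholder's shape: (3, 10)
weights = np.ones((INPUTS, HIDDEN_1))   # shape (10, 20)
update = inputs @ weights               # like tf.matmul: shape (3, 20)

# assign needs the new value's shape to equal the variable's shape,
# so a (3, 20) value cannot be assigned to a (20,) variable.
print(state.shape, update.shape)
```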
Answer
When you write state = 0.9 * state + 0.1 * (tf.matmul(inputs, weights) + biases), you do not update the value of the variable state. You only compute the value of 0.9 * state + 0.1 * ..., but the value of the variable stays the same.
To update your tf.Variable, you should use the function assign or assign_add on your variable state:
state = state.assign(0.9 * state + 0.1 * (tf.matmul(inputs, weights) + biases))
All of this is covered in the TensorFlow variables tutorial.
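Note that, as posted, the assigned value still has shape (BATCH_SIZE, HIDDEN_1) while state has shape (HIDDEN_1), which is exactly what the ValueError in the EDIT complains about; one assumption that makes the shapes agree is to create state as tf.zeros([BATCH_SIZE, HIDDEN_1]). With that in place, the intended behaviour — a persistent state that decays toward each new batch instead of restarting from zero — can be sketched in plain NumPy, where each call below stands in for one sess.run on the assign op:

```python
import numpy as np

BATCH_SIZE, INPUTS, HIDDEN_1 = 3, 10, 20

# stand-ins for the TensorFlow variables (values chosen for illustration)
state = np.zeros((BATCH_SIZE, HIDDEN_1))   # persistent across "runs"
weights = np.full((INPUTS, HIDDEN_1), 0.05)
biases = np.zeros(HIDDEN_1)

def run_once(inputs):
    """One simulated sess.run: update the persistent state in place,
    as Variable.assign would, then apply the relu."""
    global state
    state = 0.9 * state + 0.1 * (inputs @ weights + biases)
    return np.maximum(state, 0.0)

x = np.ones((BATCH_SIZE, INPUTS))
run_once(x)   # state moves from 0 toward the new activation
run_once(x)   # the second run starts from the previous state, not from zero
print(state[0, 0])
```

With constant inputs, the state converges geometrically toward inputs @ weights + biases: after the first call each entry is 0.1 * 0.5 = 0.05, after the second it is 0.9 * 0.05 + 0.1 * 0.5 = 0.095, which is the "slowly changing" behaviour the question was after.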