Can I get the gradient of a tensor with respect to the input without applying the input?
Question
For example, I need to compute the gradient of the cross_entropy with respect to x, but I need to apply another value to the gradient function.
That is:
f'(x)|x = x_t
I think the tf.gradients() function will only give the gradient at the current value of x. So does TensorFlow provide any such feature?
Answer
The result of tf.gradients is a tensor (in general, a list of tensors), not a float value. In a way, this tensor is a function: it can be evaluated at any point. The client only needs to feed the desired input values.
Example:
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API (tf.placeholder, tf.Session)

features = 3
n_samples = 10
hidden = 1

X = tf.placeholder(dtype=tf.float32, shape=[n_samples, features])
# Y is shaped [n_samples, 1] so that `pred - Y` does not broadcast to [n_samples, n_samples].
Y = tf.placeholder(dtype=tf.float32, shape=[n_samples, 1])
W = tf.Variable(np.ones([features, hidden]), dtype=tf.float32, name="weight")
b = tf.Variable(np.ones([hidden]), dtype=tf.float32, name="bias")

pred = tf.add(tf.matmul(X, W), b)
cost = tf.reduce_mean(tf.pow(pred - Y, 2))
dc_dw, dc_db = tf.gradients(cost, [W, b])

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    # Let's compute `dc_dw` at the all-ones input.
    print(dc_dw.eval(feed_dict={X: np.ones([n_samples, features]),
                                Y: np.ones([n_samples, 1])}))
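To see that the gradient tensor really is a function of whatever you feed, the same value can be checked by hand. This is a sketch in plain NumPy (not part of the original answer), using the analytic gradient of the squared-error cost above: for cost = mean((XW + b − Y)²), the gradient with respect to W is (2 / n_samples) · Xᵀ(XW + b − Y).

```python
import numpy as np

features, n_samples, hidden = 3, 10, 1

# Same all-ones point as fed to the TensorFlow graph above.
X = np.ones([n_samples, features])
Y = np.ones([n_samples, 1])
W = np.ones([features, hidden])
b = np.ones([hidden])

# residual = X @ W + b - Y; each entry is 3.0 at this point.
residual = X @ W + b - Y

# Analytic gradients of cost = mean(residual ** 2):
dc_dw = (2.0 / n_samples) * X.T @ residual   # shape (features, hidden)
dc_db = (2.0 / n_samples) * residual.sum()   # scalar

print(dc_dw)  # every entry is 6.0, matching tf.gradients at this input
print(dc_db)  # 6.0
```

Feeding a different `X` or `Y` into the analytic formula (or into the TensorFlow feed_dict) evaluates the same gradient at that new point, which is exactly the f'(x)|x = x_t behaviour the question asks for.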