How to use gradient_override_map in Tensorflow 2.0?

Question
I'm trying to use gradient_override_map with Tensorflow 2.0. There is an example in the documentation, which I will use as the example here as well.
In 2.0, GradientTape can be used to compute gradients as follows:
import tensorflow as tf
print(tf.version.VERSION)  # 2.0.0-alpha0

x = tf.Variable(5.0)
with tf.GradientTape() as tape:
    s_1 = tf.square(x)
print(tape.gradient(s_1, x))
There is also the tf.custom_gradient decorator, which can be used to define the gradient for a new function (again, using the example from the docs):
import tensorflow as tf
print(tf.version.VERSION)  # 2.0.0-alpha

@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        return dy * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad

x = tf.Variable(100.)
with tf.GradientTape() as tape:
    y = log1pexp(x)
print(tape.gradient(y, x))
However, I would like to replace the gradient for standard functions such as tf.square. I tried to use the following code:
@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
    return tf.constant(0)

with tf.Graph().as_default() as g:
    x = tf.Variable(5.0)
    with g.gradient_override_map({"Square": "CustomSquare"}):
        with tf.GradientTape() as tape:
            s_2 = tf.square(x, name="Square")
    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        print(sess.run(tape.gradient(s_2, x)))
However, there are two issues: the gradient replacement does not seem to work (it is evaluated to 10.0 instead of 0.0), and I need to resort to session.run() to execute the graph. Is there a way to achieve this in "native" TensorFlow 2.0?
In TensorFlow 1.12.0, the following produces the desired output:
import tensorflow as tf
print(tf.__version__)  # 1.12.0

@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
    return tf.constant(0)

x = tf.Variable(5.0)
g = tf.get_default_graph()
with g.gradient_override_map({"Square": "CustomSquare"}):
    s_2 = tf.square(x, name="Square")
grad = tf.gradients(s_2, x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad))
Answer
There is no built-in mechanism in TensorFlow 2.0 to override all gradients for a built-in operator within a scope. However, if you are able to modify the call-site for each call to the built-in operator, you can use the tf.custom_gradient decorator as follows:
@tf.custom_gradient
def custom_square(x):
    def grad(dy):
        return tf.constant(0.0)
    return tf.square(x), grad

with tf.Graph().as_default() as g:
    x = tf.Variable(5.0)
    with tf.GradientTape() as tape:
        s_2 = custom_square(x)
    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        print(sess.run(tape.gradient(s_2, x)))
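Note that the tf.custom_gradient approach does not actually require a Graph or a Session: in eager mode, the TensorFlow 2.0 default, the tape can evaluate the overridden gradient directly. A minimal sketch of the same zero-gradient override run eagerly:

```python
import tensorflow as tf

# The same zero-gradient override as above, run eagerly
# (no tf.Graph, no tf.compat.v1.Session).
@tf.custom_gradient
def custom_square(x):
    def grad(dy):
        # Replace the true gradient of square (2 * x * dy) with zero.
        return tf.constant(0.0)
    return tf.square(x), grad

x = tf.Variable(5.0)
with tf.GradientTape() as tape:
    s_2 = custom_square(x)
g = tape.gradient(s_2, x)
print(g)  # tf.Tensor(0.0, shape=(), dtype=float32)
```

This answers the "native TensorFlow 2.0" part of the question: the forward value and the custom gradient are both computed immediately, with no session.run().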