How to apply gradient clipping in TensorFlow?


Problem description

Consider the example code.

I would like to know how to apply gradient clipping to this network, an RNN where there is a possibility of exploding gradients.

tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)
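For context, tf.clip_by_value simply clamps each element of a tensor into the range [clip_value_min, clip_value_max]; a minimal illustration with made-up values:

import tensorflow as tf

t = tf.constant([-2.0, 0.5, 3.0])
clipped = tf.clip_by_value(t, clip_value_min=-1.0, clip_value_max=1.0)
# clipped evaluates to [-1.0, 0.5, 1.0]: each element is clamped into the range.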

This is an example that could be used, but where do I introduce it? In the definition of the RNN?

    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # Split data because rnn cell needs a list of inputs for the RNN inner loop
    _X = tf.split(0, n_steps, _X) # n_steps
    tf.clip_by_value(_X, -1, 1, name=None)

But this doesn't make sense, as the tensor _X is the input and not the gradient that is to be clipped?

Do I have to define my own Optimizer for this, or is there a simpler option?

Answer

Gradient clipping needs to happen after computing the gradients, but before applying them to update the model's parameters. In your example, both of those things are handled by the AdamOptimizer.minimize() method.

In order to clip your gradients, you'll need to explicitly compute, clip, and apply them as described in this section of TensorFlow's API documentation. Specifically, you'll need to substitute the call to the minimize() method with something like the following:

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
# Compute the gradients as a list of (gradient, variable) pairs.
gvs = optimizer.compute_gradients(cost)
# Clip each gradient elementwise to [-1, 1]; variables without a gradient
# come back as (None, var) and must be skipped to avoid an error.
capped_gvs = [(tf.clip_by_value(grad, -1., 1.), var)
              for grad, var in gvs if grad is not None]
train_op = optimizer.apply_gradients(capped_gvs)
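
If you would rather clip by the global norm of all gradients at once (often preferred for RNNs, since it preserves the direction of the overall update), tf.clip_by_global_norm can be substituted for the per-value clipping. A minimal sketch, where the threshold 5.0 is an illustrative choice rather than anything from the question:

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
gvs = optimizer.compute_gradients(cost)
# Separate gradients and variables, skipping variables with no gradient.
grads, variables = zip(*[(g, v) for g, v in gvs if g is not None])
# Rescale all gradients together so their combined L2 norm is at most 5.0.
clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = optimizer.apply_gradients(zip(clipped_grads, variables))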
