How to accumulate gradients in tensorflow?


Problem description

I have a problem similar to this question.

Because I have limited resources and I work with a deep model (VGG-16), used to train a triplet network, I want to accumulate the gradients over 128 batches of one training example each, and then propagate the error and update the weights.

It's not clear to me how to do this. I work with tensorflow, but any implementation/pseudocode is welcome.

Recommended answer

Let's walk through the code proposed in one of the answers you linked to:

## Optimizer definition - nothing different from any classical example
opt = tf.train.AdamOptimizer()

## Retrieve all trainable variables you defined in your graph
tvs = tf.trainable_variables()

## Create a list of non-trainable variables with the same shapes as the
## trainable ones, initialized with zeros, to hold the accumulated gradients
accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False)
              for tv in tvs]
## Ops that reset the accumulators to zero
zero_ops = [av.assign(tf.zeros_like(av)) for av in accum_vars]

## Call the compute_gradients function of the optimizer to obtain
## the list of (gradient, variable) pairs
gvs = opt.compute_gradients(rmse, tvs)

## Add each gradient to the matching accumulator (this works because
## accum_vars and gvs are in the same order)
accum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]

## Define the training step (the part that updates the variable values)
train_step = opt.apply_gradients([(accum_vars[i], gv[1]) for i, gv in enumerate(gvs)])

This first part basically adds new variables and ops to your graph, which will allow you to:


  1. Accumulate the gradients with the ops accum_ops in (the list of) variables accum_vars
  2. Update the model weights with the op train_step
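
One caveat worth flagging (my addition, not part of the linked answer): compute_gradients returns None as the gradient of any variable that does not influence the loss, and assign_add fails on a None. If your graph can contain such variables, a defensive variant of the accumulation op could skip them:

## Hypothetical defensive variant (assumption): skip (gradient, variable)
## pairs whose gradient is None, i.e. variables that don't affect the loss
accum_ops = [accum_vars[i].assign_add(gv[0])
             for i, gv in enumerate(gvs) if gv[0] is not None]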

Then, to use it when training, you have to follow these steps (still from the answer you linked):

## The while loop for training
while ...:
    # Run zero_ops to reset the gradient accumulators
    sess.run(zero_ops)
    # Accumulate the gradients 'n_minibatches' times in accum_vars using accum_ops
    for i in range(n_minibatches):
        sess.run(accum_ops, feed_dict={X: Xs[i], y: ys[i]})
    # Run the train_step op to update the weights based on the accumulated gradients
    sess.run(train_step)
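
Note that the accum_ops above sum the gradients across the minibatches. If you want the update to behave like a single batch of n_minibatches examples under a mean loss, one option (an assumption on my part, not from the linked answer) is to divide each gradient by the number of minibatches as you accumulate it:

## Variation (assumption): average instead of sum, so the accumulated
## gradient matches the gradient of the mean loss over n_minibatches examples
accum_ops = [accum_vars[i].assign_add(gv[0] / float(n_minibatches))
             for i, gv in enumerate(gvs)]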

