Tensorflow: Multi-GPU single input queue


Problem description

In tensorflow's cifar10 multi-GPU example, it seems (correct me if I am wrong) that one queue of training images is created per GPU. Wouldn't the "right" way of doing things be to have a single queue feeding all of the towers? If so, is there an example available of a shared queue?

Answer

You're correct that the code for the CIFAR-10 model uses multiple input queues (through multiple calls to cifar10.distorted_inputs() via cifar10.tower_loss()).
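
Concretely, the per-tower pattern looks roughly like this (a paraphrase of cifar10_multi_gpu_train.py, not the verbatim source):

    for i in xrange(FLAGS.num_gpus):
      with tf.device('/gpu:%d' % i):
        with tf.name_scope('%s_%d' % (cifar10.TOWER_NAME, i)) as scope:
          # tower_loss() calls cifar10.distorted_inputs() internally,
          # so each tower builds its own input queue.
          loss = tower_loss(scope)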

The easiest way to use a shared queue between the GPUs would be to do the following:

  1. Increase the batch size by a factor of N, where N is the number of GPUs.

  2. Move the call to cifar10.distorted_inputs() out of cifar10.tower_loss() and outside the loop over GPUs.

  3. Split the images and labels tensors that are returned from cifar10.distorted_inputs() along the 0th (batch) dimension:

    images, labels = cifar10.distorted_inputs()
    split_images = tf.split(0, FLAGS.num_gpus, images)
    split_labels = tf.split(0, FLAGS.num_gpus, labels)
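    # Note: this answer predates TensorFlow 1.0; in TF 1.0 and later the
    # argument order of tf.split() changed, so the equivalent would be:
    #   split_images = tf.split(images, FLAGS.num_gpus, axis=0)
    #   split_labels = tf.split(labels, FLAGS.num_gpus, axis=0)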

  4. Modify cifar10.tower_loss() to take images and labels arguments, and invoke it as follows:

    for i in xrange(FLAGS.num_gpus):
      with tf.device('/gpu:%d' % i):
        with tf.name_scope('%s_%d' % (cifar10.TOWER_NAME, i)) as scope:
          loss = tower_loss(scope, split_images[i], split_labels[i])

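For reference, a minimal sketch of the modified tower_loss() might look like the following (an assumption based on the structure of the original function in cifar10_multi_gpu_train.py, with the summary code omitted):

    def tower_loss(scope, images, labels):
      # Inputs now arrive as arguments instead of being created by a
      # per-tower call to cifar10.distorted_inputs(), so every tower
      # shares the single input pipeline built outside the GPU loop.
      logits = cifar10.inference(images)
      _ = cifar10.loss(logits, labels)
      # Assemble this tower's total loss (cross-entropy plus weight decay).
      losses = tf.get_collection('losses', scope)
      total_loss = tf.add_n(losses, name='total_loss')
      return total_loss

With these changes there is a single set of input queues feeding the whole model, and tf.split() fans each dequeued batch out across the GPUs.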