Is it possible to split a network across multiple GPUs in tensorflow?


Problem description

I plan to run a very large recurrent network (e.g. 2048x5). Is it possible to define one layer on one GPU in TensorFlow? How should I implement the model to achieve the best efficiency? I understand there is overhead for inter-GPU or GPU-CPU-GPU communication.

Recommended answer

Splitting a large model across multiple GPUs is certainly possible in TensorFlow, but doing it optimally is a hard research problem. In general, you will need to do the following:

  1. Wrap large contiguous regions of your code in a with tf.device(...): block, naming the different GPUs (a sketch applying this to stacked recurrent layers appears after these steps):

with tf.device("/gpu:0"):
  # Define first layer.

with tf.device("/gpu:1"):
  # Define second layer.

# Define other layers, etc.

  2. When building your optimizer, pass the optional argument colocate_gradients_with_ops=True to the optimizer.minimize() method, so that each gradient op is placed on the same device as the corresponding forward op and the backward pass stays split across the GPUs as well:

    loss = ...
    optimizer = tf.train.AdagradOptimizer(0.01)
    train_op = optimizer.minimize(loss, colocate_gradients_with_ops=True)
    

  3. (Optional.) You may need to enable "soft placement" in the tf.ConfigProto when you create your tf.Session, if any of the operations in your model cannot run on a GPU:

    config = tf.ConfigProto(allow_soft_placement=True)
    sess = tf.Session(config=config)
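
With soft placement enabled, TensorFlow falls back to another device (typically the CPU) for any op that cannot be placed where you asked. If you want to check where each op actually ended up, you can also set log_device_placement=True in the same tf.ConfigProto; TensorFlow will then log the device assigned to every op.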
    

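To tie this back to the question, the following is a minimal sketch of placing successive recurrent layers on different GPUs. It assumes a TF 1.x version where tf.nn.rnn_cell and tf.nn.dynamic_rnn are available; the layer size, number of GPUs, and names are illustrative assumptions, not taken from the actual model:

import tensorflow as tf

# Input sequence with shape [batch, time, features]; the 2048 feature
# size here is only an illustrative assumption.
inputs = tf.placeholder(tf.float32, [None, None, 2048])

with tf.device("/gpu:0"):
  # First recurrent layer lives entirely on GPU 0.
  cell_0 = tf.nn.rnn_cell.LSTMCell(2048)
  outputs_0, _ = tf.nn.dynamic_rnn(cell_0, inputs, dtype=tf.float32, scope="layer0")

with tf.device("/gpu:1"):
  # Second layer on GPU 1; only the activations cross the GPU boundary.
  cell_1 = tf.nn.rnn_cell.LSTMCell(2048)
  outputs_1, _ = tf.nn.dynamic_rnn(cell_1, outputs_0, dtype=tf.float32, scope="layer1")

# Further layers follow the same pattern, each in its own tf.device block;
# the loss and optimizer are then built as in step 2 above.

With this layout, only the activations passed between layers travel across devices, while each layer's weights and recurrent computation stay on a single GPU.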
