How to enable Keras with Theano to utilize multiple GPUs


Question

Setup:

  • Using an Amazon Linux system with Nvidia GPUs
  • I'm using Keras 1.0.1
  • Running the Theano v0.8.2 backend
  • Using CUDA and cuDNN
  • THEANO_FLAGS="device=gpu,floatX=float32,lib.cnmem=1" (see the sketch just below)
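One practical note on that last flag: THEANO_FLAGS must be in the environment before theano (and therefore keras) is imported, and device=gpu0, device=gpu1, etc. pin a process to a specific card. A minimal sketch of setting it programmatically (pinning to gpu0 here is just an illustration):

import os

# Must run before theano/keras are imported; device=gpu0 pins this process
# to the first card, device=gpu1 to the second, and so on.
os.environ['THEANO_FLAGS'] = 'device=gpu0,floatX=float32,lib.cnmem=1'

import theano   # noqa: E402 -- imported deliberately after the env var is set
import keras    # noqa: E402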

Everything works fine, but I run out of video memory on large models when I increase the batch size to speed up training. I figure that moving to a 4-GPU system would, in theory, either increase the total memory available or allow smaller batches to build faster; but observing the nvidia stats, I can see that only one GPU is used by default:

+------------------------------------------------------+
| NVIDIA-SMI 361.42     Driver Version: 361.42         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   44C    P0    45W / 125W |   3954MiB /  4095MiB |     94%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GRID K520           Off  | 0000:00:04.0     Off |                  N/A |
| N/A   28C    P8    17W / 125W |     11MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GRID K520           Off  | 0000:00:05.0     Off |                  N/A |
| N/A   32C    P8    17W / 125W |     11MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GRID K520           Off  | 0000:00:06.0     Off |                  N/A |
| N/A   29C    P8    17W / 125W |     11MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      9862    C   python34                                      3941MiB |
+-----------------------------------------------------------------------------+

I know that with raw Theano you can manually use multiple GPUs explicitly. Does Keras support the use of multiple GPUs? If so, does it abstract this away, or do you need to map the GPUs to devices as in Theano and explicitly marshal computations to specific GPUs?
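For reference, the "raw Theano" route mentioned above is the one documented for Theano's libgpuarray backend: you declare named GPU contexts in THEANO_FLAGS and pin shared variables to them with target=. A minimal sketch along the lines of Theano's multi-GPU documentation (the context names dev0/dev1 are arbitrary labels):

# Run with: THEANO_FLAGS="contexts=dev0->cuda0;dev1->cuda1"
# (requires the libgpuarray backend available with Theano 0.8.x)
import numpy
import theano
import theano.tensor as T

# Pin one pair of matrices to each GPU context.
v01 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
                    target='dev0')
v02 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
                    target='dev0')
v11 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
                    target='dev1')
v12 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
                    target='dev1')

# Each dot product is computed on the GPU that owns its operands.
f = theano.function([], [T.dot(v01, v02), T.dot(v11, v12)])
f()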

Answer

Multi-GPU training is experimental ("The code is rather new and is still considered experimental at this point. It has been tested and seems to perform correctly in all cases observed, but make sure to double-check your results before publishing a paper or anything of the sort.") and hasn't been integrated into Keras yet. However, you can use multiple GPUs with Keras with the Tensorflow backend: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html#multi-gpu-and-distributed-training.
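The approach in the linked tutorial is data parallelism: a single Keras model is built once as an op template and then called inside TensorFlow device scopes, one replica per GPU. A minimal sketch in that spirit (the layer sizes and the simple averaging of replica outputs are illustrative assumptions, not the tutorial verbatim):

import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

with tf.device('/cpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 784))

    # Shared model living on CPU:0. It is never run directly; it serves as
    # an op template and as the repository for the shared weights.
    model = Sequential()
    model.add(Dense(32, activation='relu', input_dim=784))
    model.add(Dense(10, activation='softmax'))

with tf.device('/gpu:0'):
    output_0 = model(x)  # all ops in this replica live on GPU 0

with tf.device('/gpu:1'):
    output_1 = model(x)  # all ops in this replica live on GPU 1

with tf.device('/cpu:0'):
    preds = 0.5 * (output_0 + output_1)  # merge replica outputs on the CPU

# 'preds' can then be trained with an ordinary TensorFlow optimizer/session;
# in a real setup each replica would be fed its own slice of the batch.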
