Keras + Tensorflow: Prediction on multiple GPUs

Problem Description

I'm using Keras with TensorFlow as the backend. I have one compiled/trained model.

My prediction loop is slow, so I would like to find a way to parallelize the predict_proba calls to speed things up. I would like to take a list of batches (of data) and then, per available GPU, run model.predict_proba() over a subset of those batches.
Essentially:

data = [ batch_0, batch_1, ... , batch_N ]
on gpu_0 => return predict_proba(batch_0)
on gpu_1 => return predict_proba(batch_1)
...
on gpu_N => return predict_proba(batch_N) 

I know that it's possible in pure TensorFlow to assign ops to a given GPU (https://www.tensorflow.org/tutorials/using_gpu). However, I don't know how this translates to my situation, since I've built/compiled/trained my model using Keras' API.

I had thought that maybe I just needed to use Python's multiprocessing module and start one process per GPU that would run predict_proba(batch_n). I know this is theoretically possible given another SO post of mine: Keras + Tensorflow and Multiprocessing in Python. However, this still leaves me with the dilemma of not knowing how to actually "choose" a GPU for each process to operate on.

My question boils down to this: how does one parallelize prediction for one model in Keras across multiple GPUs when using TensorFlow as Keras' backend?

Additionally, I am curious whether similar parallelization of prediction is possible with only one GPU.

A high level description or code example would be greatly appreciated!

Thanks!

Recommended Answer

I created a simple example to show how to run a Keras model across multiple GPUs. Basically, multiple processes are created, and each process owns one GPU. To specify which GPU a process uses, setting the environment variable CUDA_VISIBLE_DEVICES in that process (os.environ["CUDA_VISIBLE_DEVICES"]) is a very straightforward way. I hope this git repo can help you.

https://github.com/yuanyuanli85/Keras-Multiple-Process-Prediction
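The pattern from the repo above can be sketched roughly as follows. Note this is a minimal sketch, not the repo's actual code: the model filename in the comments is a placeholder, and the Keras load/predict calls are stubbed out with a dummy result so the multiprocessing and CUDA_VISIBLE_DEVICES mechanics are visible without TensorFlow or a GPU present. The key point is that each worker sets CUDA_VISIBLE_DEVICES before TensorFlow is imported, so that process only ever sees its one assigned device.

```python
import os
import multiprocessing as mp


def gpu_worker(gpu_id, batches, out_queue):
    # Pin this process to a single GPU *before* TensorFlow is imported.
    # TensorFlow will then see only this device (as device 0).
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)

    # In the real worker you would now do something like:
    #   from keras.models import load_model
    #   model = load_model("model.h5")  # placeholder path
    #   preds = [model.predict_proba(batch) for batch in batches]
    # Stubbed here so the pattern runs without TensorFlow installed:
    preds = [(gpu_id, idx) for idx, _ in batches]
    out_queue.put(preds)


def parallel_predict(data, n_gpus):
    # Shard the list of batches round-robin over the available GPUs.
    shards = [data[i::n_gpus] for i in range(n_gpus)]
    queue = mp.Queue()
    procs = [mp.Process(target=gpu_worker, args=(g, shards[g], queue))
             for g in range(n_gpus)]
    for p in procs:
        p.start()
    # Drain the queue before joining to avoid blocking on large payloads.
    results = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return results


if __name__ == "__main__":
    # Batches are (index, payload) pairs here purely for the stub.
    data = [(i, None) for i in range(8)]
    print(parallel_predict(data, n_gpus=2))
```

One detail worth knowing: CUDA_VISIBLE_DEVICES must be set before the first TensorFlow/CUDA initialization in the process, which is why the assignment happens at the top of the worker rather than in the parent.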
