Mixing CPU and GPU usage in Keras
Question
I am building a neural network in Keras that includes multiple LSTM, Permute and Dense layers.

LSTM seems to be GPU-unfriendly, so I did some research and used:
with tf.device('/cpu:0'):
    out = LSTM(cells)(inp)
But based on my understanding of with, a with statement is essentially a try...finally block that ensures clean-up code is executed. I don't know whether the following CPU/GPU mixture code works or not. Will it speed up training?
with tf.device('/cpu:0'):
    out = LSTM(cells)(inp)
with tf.device('/gpu:0'):
    out = Permute(some_shape)(out)
with tf.device('/cpu:0'):
    out = LSTM(cells)(out)
with tf.device('/gpu:0'):
    out = Dense(output_size)(out)
Answer
As you may read here, tf.device is a context manager which switches the default device to the one passed as its argument within the context (block) it creates. So this code should run everything placed under '/cpu:0' on the CPU and the rest on the GPU.
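To make this concrete, here is a minimal sketch of the question's layer stack with explicit device pinning, assuming TensorFlow 1.x as the Keras backend and a machine that actually has a GPU; the input shape and layer sizes are placeholder values, not taken from the question:

import tensorflow as tf
from keras.layers import Input, LSTM, Permute, Dense
from keras.models import Model

# Placeholder input: 10 timesteps of 32 features each.
inp = Input(shape=(10, 32))

with tf.device('/cpu:0'):
    out = LSTM(64, return_sequences=True)(inp)   # first LSTM pinned to CPU

with tf.device('/gpu:0'):
    out = Permute((2, 1))(out)                   # Permute pinned to GPU

with tf.device('/cpu:0'):
    out = LSTM(64)(out)                          # second LSTM pinned to CPU

with tf.device('/gpu:0'):
    out = Dense(1)(out)                          # Dense layer pinned to GPU

model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='mse')
model.summary()

Note that leaving a with tf.device block simply restores the previous default device; that restoration is the clean-up the underlying try...finally performs, so chaining several blocks back to back, as in the question, is valid.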
Whether it will speed up your training is really hard to answer, because it depends on the machine you use. But I would not expect computations to be faster: each change of device causes data to be copied between GPU RAM and machine RAM. This could even slow down your computations.
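If you want to verify where each op actually runs (and how often tensors cross the CPU/GPU boundary), one option, sketched here for TensorFlow 1.x, is to enable device-placement logging on the session that Keras uses:

import tensorflow as tf
from keras import backend as K

# Log every op's assigned device, and let TensorFlow fall back to another
# device instead of failing when the requested one is unavailable.
config = tf.ConfigProto(log_device_placement=True,
                        allow_soft_placement=True)
K.set_session(tf.Session(config=config))

After this, building and running the model prints one placement line per op, which makes the CPU/GPU transitions visible. (In TensorFlow 2.x the equivalent is tf.debugging.set_log_device_placement(True).)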