Mixed usage of CPU and GPU in Keras


Question

I am building a neural network in Keras that includes multiple LSTM, Permute and Dense layers.

LSTM seems to be GPU-unfriendly, so I did some research and used

with tf.device('/cpu:0'):
    out = LSTM(cells)(inp)

But based on my understanding of with, a with statement is essentially a try...finally block that guarantees clean-up code runs. I don't know whether the following mixed CPU/GPU code works or not. Will it speed up training?

with tf.device('/cpu:0'):
    out = LSTM(cells)(inp)
with tf.device('/gpu:0'):
    out = Permute(some_shape)(out)
with tf.device('/cpu:0'):
    out = LSTM(cells)(out)
with tf.device('/gpu:0'):
    out = Dense(output_size)(out)

Answer

As you may read in the TensorFlow documentation, tf.device is a context manager which switches the default device to the one passed as its argument within the context (block) it creates. So this code should run everything placed under '/cpu:0' on the CPU and the rest on the GPU.
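As a concrete illustration (a minimal sketch, not code from the original answer: the shapes, layer sizes, tensorflow.keras imports, and soft-placement call are all assumptions added here), the questioner's model could be built with explicit device pinning like this:

import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Permute, Dense
from tensorflow.keras.models import Model

# Fall back to another device gracefully if no GPU is visible (TF 2.x).
tf.config.set_soft_device_placement(True)

# Hypothetical placeholder sizes, not taken from the question.
timesteps, features, cells, output_size = 10, 8, 32, 4

inp = Input(shape=(timesteps, features))
with tf.device('/cpu:0'):
    # return_sequences=True keeps the output 3D so Permute can follow it
    out = LSTM(cells, return_sequences=True)(inp)
with tf.device('/gpu:0'):
    out = Permute((2, 1))(out)  # (timesteps, cells) -> (cells, timesteps)
with tf.device('/cpu:0'):
    out = LSTM(cells)(out)
with tf.device('/gpu:0'):
    out = Dense(output_size)(out)

model = Model(inp, out)
model.compile(optimizer='adam', loss='mse')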

Whether it will speed up your training is really hard to answer, because that depends on the machine you use, but I would not expect the computation to be faster: every device change forces data to be copied between GPU memory and host RAM. That could even slow down your computation.
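One practical way to settle the question for a specific machine (a suggestion added here, not part of the original answer) is to time a few epochs of the device-pinned model against the same model built without any tf.device blocks:

import time
import numpy as np

# Reuses model, timesteps, features and output_size from the sketch above,
# with random placeholder data just for timing.
x = np.random.rand(256, timesteps, features).astype('float32')
y = np.random.rand(256, output_size).astype('float32')

start = time.time()
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print('seconds for 3 epochs:', time.time() - start)

If the pinned version is consistently slower, the device-to-device copies are likely dominating, which matches the expectation above.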

