Tensorflow - Is it possible to manually decide which tensors in a graph go to the GPU and CPU?


Problem description

I've created a network in Keras that uses both convolutional and LSTM layers.

And I've read that tensorflow doesn't handle LSTM layers well on the GPU: http://minimaxir.com/2017/07/cpu-or-gpu/

In fact, my network got slower on the GPU (hopefully that's not a sign of a further problem).

I want tensorflow to run all convolutional operations on the GPU, but keep all the LSTM operations on the CPU. Is that possible?

So, given that I can access the tensorflow graph and identify each of its tensors, can I manually define where each tensor performs its operations?

Recommended answer

When implementing this network, you could (if you are using TensorFlow as the backend) try the following technique:

import tensorflow as tf

with tf.device('/cpu:0'):
    # model layers which should run on the CPU go here
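
For example, here is a minimal sketch of how the convolutional part could be pinned to the GPU while the LSTM stays on the CPU. It assumes the Keras functional API with the TensorFlow backend; the layer types, shapes, and device strings ('/gpu:0', '/cpu:0') are illustrative and should be adapted to your actual model.

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D, LSTM, Dense
from tensorflow.keras.models import Model

# Illustrative input: 100 timesteps with 16 features each
inputs = Input(shape=(100, 16))

# Convolutional layers placed on the GPU
with tf.device('/gpu:0'):
    x = Conv1D(32, kernel_size=3, activation='relu')(inputs)
    x = MaxPooling1D(pool_size=2)(x)

# LSTM layer placed on the CPU
with tf.device('/cpu:0'):
    x = LSTM(64)(x)

outputs = Dense(1, activation='sigmoid')(x)

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')

You can verify where operations actually end up by enabling device-placement logging, for example with tf.Session(config=tf.ConfigProto(log_device_placement=True)) in TensorFlow 1.x.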
