How to run Keras on multiple cores?

Problem Description

I'm using Keras with the Tensorflow backend on a cluster (creating neural networks). How can I run it in a multi-threaded way on the cluster (on several cores), or is this done automatically by Keras? For example, in Java one can create several threads, each thread running on a core.

If possible, how many cores should be used?

Recommended Answer

Tensorflow automatically runs the computations on as many cores as are available on a single machine.
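
If you want to verify or cap that behavior, the thread pools can be set explicitly through the session config. A minimal sketch, assuming TF 1.x; the thread counts below are illustrative, not a recommendation (both fields default to 0, which lets Tensorflow size the pools from the available cores):

import tensorflow as tf

# Both values default to 0, letting Tensorflow size the pools itself;
# the counts here are illustrative placeholders.
config = tf.ConfigProto(
    intra_op_parallelism_threads=4,  # threads used within a single op
    inter_op_parallelism_threads=4)  # threads running independent ops in parallel
sess = tf.Session(config=config)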

If you have a distributed cluster, be sure to follow the instructions at https://www.tensorflow.org/how_tos/distributed/ to configure the cluster (e.g. create the tf.ClusterSpec correctly, etc.).
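
As a rough sketch of that setup (the host names and ports below are placeholders, not real addresses), each process in the cluster builds the same tf.train.ClusterSpec and starts its own server:

import tensorflow as tf

# Placeholder addresses -- substitute your cluster's real hosts and ports.
cluster = tf.train.ClusterSpec({
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
    "ps": ["ps0.example.com:2222"],
})

# Each process runs one server, identified by its job name and task index.
server = tf.train.Server(cluster, job_name="worker", task_index=0)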

To help debug, you can use the log_device_placement configuration option on the session to have Tensorflow print out where the computations are actually placed. (Note: this works for GPUs as well as for distributed Tensorflow.)

import tensorflow as tf

# Create a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

Note that while Tensorflow's computation placement algorithm works fine for small computational graphs, you might be able to get better performance on large computational graphs by manually placing the computations on specific devices (e.g. using with tf.device(...): blocks).
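
A minimal sketch of manual placement (the device string and the tiny graph are illustrative; in a distributed setup the string might instead name a job and task):

import tensorflow as tf

# Pin this subgraph to the first CPU device; in a cluster you could use
# a string like "/job:worker/task:0" instead.
with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)

# log_device_placement confirms which device each op actually landed on.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(b))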
