Does TensorFlow view all CPUs of one machine as ONE device?


Problem description

From the experiments I have run, it seems that TensorFlow automatically uses all CPUs on one machine. Furthermore, it seems that TensorFlow refers to all of these CPUs as /cpu:0.
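A quick way to check this observation (a minimal sketch, assuming the TF 1.x session API that the question appears to use) is to enable log_device_placement and watch every op land on the single CPU device:

```python
import tensorflow as tf

# Two largish tensors so the matmul actually exercises multiple cores.
a = tf.random_normal([1000, 1000])
b = tf.random_normal([1000, 1000])
c = tf.matmul(a, b)

# log_device_placement prints the device chosen for every op; on a
# CPU-only machine everything is reported on the CPU:0 device.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(c)
```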

Am I right that only the different GPUs of one machine get indexed and treated as separate devices, while all the CPUs on one machine are treated as a single device?

Is there any way for a machine to expose multiple CPUs, viewed from TensorFlow's perspective?

Answer

By default, all CPUs available to the process are aggregated under the cpu:0 device.

There's an answer by mrry here showing how to create logical devices like /cpu:1, /cpu:2.
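For illustration, a minimal sketch of that ConfigProto-based approach (TF 1.x style; the two-device split and the thread counts are assumptions for the example) could look like this. Note that the split is purely logical and does not bind work to particular physical cores:

```python
import tensorflow as tf

# Ask the runtime to expose two logical CPU devices instead of one.
config = tf.ConfigProto(device_count={"CPU": 2},
                        inter_op_parallelism_threads=2,
                        intra_op_parallelism_threads=1)

# Ops can now be placed explicitly on /cpu:0 and /cpu:1.
with tf.device("/cpu:0"):
    a = tf.constant(1.0)
with tf.device("/cpu:1"):
    b = tf.constant(2.0)
c = a + b

with tf.Session(config=config) as sess:
    print(sess.run(c))  # 3.0
```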

There doesn't seem to be working functionality to pin logical devices to specific physical cores, or to make use of NUMA nodes, in TensorFlow.

A possible work-around is to use distributed TensorFlow with multiple processes on one machine and use taskset on Linux to pin specific processes to specific cores, as sketched below.
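A hypothetical single-machine setup along those lines (TF 1.x distributed API; the worker ports, core ranges, and the file name worker.py are made up for the example) might look like this, with core affinity supplied by taskset at launch time:

```python
# worker.py -- launch each process pinned to its own cores, e.g.:
#   taskset -c 0-3 python worker.py --task_index=0
#   taskset -c 4-7 python worker.py --task_index=1
import argparse
import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument("--task_index", type=int, default=0)
args = parser.parse_args()

# Two worker tasks on the same host; the ports are arbitrary examples.
cluster = tf.train.ClusterSpec({"worker": ["localhost:2222", "localhost:2223"]})
server = tf.train.Server(cluster, job_name="worker", task_index=args.task_index)

# Ops placed on this task run inside this process, whose cores are
# restricted by taskset rather than by TensorFlow itself.
with tf.device("/job:worker/task:%d" % args.task_index):
    c = tf.constant("hello from task %d" % args.task_index)

with tf.Session(server.target) as sess:
    print(sess.run(c))
```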
