Why Tensorflow creates so many CPU threads
Question
Even with the values
inter_op_parallelism_threads = 1
intra_op_parallelism_threads = 1
set, a TensorFlow 1.5 process is not single-threaded. Why? Is there a way to completely disable the unexpected thread spawning?
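For reference, in TensorFlow 1.x these two options are passed through a `ConfigProto` when the session is created; a minimal configuration fragment looks like this:

```python
import tensorflow as tf

# Limit TensorFlow's own thread pools to a single thread each.
# inter_op: threads running independent ops in parallel;
# intra_op: threads parallelizing work inside a single op.
config = tf.ConfigProto(
    inter_op_parallelism_threads=1,
    intra_op_parallelism_threads=1)

with tf.Session(config=config) as sess:
    # ... build and run the graph here ...
    pass
```

As the answer below explains, this only constrains TensorFlow's op-execution thread pools; other layers of the stack still spawn threads of their own.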
Answer
First of all, TensorFlow is a multi-level software stack, and each layer tries to be smart and introduces some worker threads of its own:
- One thread is created by the Python runtime
- Two more threads are created by the NVIDIA CUDA runtime
Next, there are threads originating from the way TensorFlow administers internal compute jobs:
- Threads are constantly created/joined to poll for job completion (the gRPC engine)
Thus, TensorFlow cannot be single-threaded, even with all options set to 1. This design is presumably intended to reduce latencies for asynchronous jobs. Yet it has a drawback: multicore compute libraries, such as linear algebra libraries, perform cache-intensive operations best with a static, symmetric core-to-thread mapping, and the dangling callback threads spawned by TensorFlow constantly disturb that symmetry.
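To verify the claim that the process is not single-threaded, you can count the OS-level threads of the running process. A minimal, Linux-specific sketch (it parses the `Threads:` field of `/proc/<pid>/status`, so it sees native threads that Python's `threading` module cannot):

```python
import os

def os_thread_count(pid="self"):
    """Return the number of OS threads for a process.
    Linux-specific: reads the Threads: field in /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("Threads:"):
                return int(line.split()[1])
    raise RuntimeError("Threads: field not found")

# A bare interpreter starts with a single OS thread; importing
# TensorFlow and creating a session typically raises this number
# well above 1, even with both parallelism options set to 1.
print(os_thread_count())
```

Calling `os_thread_count()` before and after `import tensorflow` makes the extra threads from each layer of the stack directly visible.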