Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads


Problem Description

Can somebody please explain the following TensorFlow terms

  1. inter_op_parallelism_threads
  2. intra_op_parallelism_threads

or, please, provide links to the right source of explanation.

I have conducted a few tests by changing the parameters, but the results have not been consistent enough to arrive at a conclusion.

Recommended Answer

The inter_op_parallelism_threads and intra_op_parallelism_threads options are documented in the source of the tf.ConfigProto protocol buffer. These options configure two thread pools used by TensorFlow to parallelize execution, as the comments describe:

// The execution of an individual op (for some op types) can be
// parallelized on a pool of intra_op_parallelism_threads.
// 0 means the system picks an appropriate number.
int32 intra_op_parallelism_threads = 2;

// Nodes that perform blocking operations are enqueued on a pool of
// inter_op_parallelism_threads available in each process.
//
// 0 means the system picks an appropriate number.
//
// Note that the first Session created in the process sets the
// number of threads for all future sessions unless use_per_session_threads is
// true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
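
For concreteness, here is a minimal sketch of how these options can be set when creating a session (assuming the TF 1.x tf.ConfigProto/tf.Session API; the thread counts below are arbitrary values chosen for illustration):

import tensorflow as tf

# Arbitrary example values; 0 would let TensorFlow pick an appropriate
# number for each pool.
config = tf.ConfigProto(
    intra_op_parallelism_threads=4,  # threads available inside a single op
    inter_op_parallelism_threads=2)  # threads for running independent ops

sess = tf.Session(config=config)

In TensorFlow 2.x, similar settings are exposed through tf.config.threading.set_intra_op_parallelism_threads() and tf.config.threading.set_inter_op_parallelism_threads().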

There are several possible forms of parallelism when running a TensorFlow graph, and these options provide some control over multi-core CPU parallelism:

  • If you have an operation that can be parallelized internally, such as matrix multiplication (tf.matmul()) or a reduction (e.g. tf.reduce_sum()), TensorFlow will execute it by scheduling tasks in a thread pool with intra_op_parallelism_threads threads. This configuration option therefore controls the maximum parallel speedup for a single operation. Note that if you run multiple operations in parallel, these operations will share this thread pool.

  • If you have many operations that are independent in your TensorFlow graph—because there is no directed path between them in the dataflow graph—TensorFlow will attempt to run them concurrently, using a thread pool with inter_op_parallelism_threads threads. If those operations have a multithreaded implementation, they will (in most cases) share the same thread pool for intra-op parallelism (see the sketch after this list).
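
As a rough sketch of how the two pools interact (again assuming the TF 1.x graph API; the shapes and thread counts are illustrative), two matmuls with no data dependency between them can be dispatched concurrently by the inter-op pool, while each matmul can internally use up to intra_op_parallelism_threads threads:

import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=4,
                        inter_op_parallelism_threads=2)

# x and y are independent: there is no directed path between them in the
# dataflow graph, so TensorFlow can schedule them concurrently on the
# inter-op pool.
a = tf.random_normal([1000, 1000])
b = tf.random_normal([1000, 1000])
x = tf.matmul(a, a)  # each matmul may itself use the intra-op thread pool
y = tf.matmul(b, b)

with tf.Session(config=config) as sess:
    sess.run([x, y])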

Finally, both configuration options take a default value of 0, which means "the system picks an appropriate number." Currently, this means that each thread pool will have one thread per CPU core in your machine.

