Does TensorFlow automatically parallelize independent operations?


Question

Let's say I have the following line of code in TensorFlow (Python interface):

z = tf.matmul(W_1, x_1) + tf.matmul(W_2, x_2) + ... + tf.matmul(W_N, x_N) + b

All of the above N operations are independent, and the result is accumulated in z. Will TensorFlow, for example, launch N kernels independently and then accumulate the result, or will it process the N operations in series?

I ask because this has an impact on how much effort I need to expend to vectorize operations, at the expense of reduced readability and convenience. What I am hoping is that TF launches all N GPU kernels asynchronously, accumulates the output in z, and returns the result.
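For concreteness, the vectorized alternative I have in mind might look like this (a sketch, assuming all W_i share one shape and all x_i share another so they can be stacked, and that tf.matmul performs a batched matrix multiply on rank-3 tensors):

# Stack the N weight matrices and N inputs along a new leading axis.
W = tf.stack([W_1, W_2, W_3])   # shape [N, m, k]
x = tf.stack([x_1, x_2, x_3])   # shape [N, k, n]
# One batched matmul kernel, then a reduction over the batch axis.
z = tf.reduce_sum(tf.matmul(W, x), axis=0) + b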

Additionally, assuming TF does process the above statement in parallel, are there any limitations on this? For instance, if I were to accumulate z in a for loop (or over several lines with intermediate variables), would I lose this benefit?
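For reference, the for-loop variant I mean would build a chain of binary add ops rather than a single n-ary sum (a sketch; weights and inputs stand for hypothetical Python lists holding the W_i and x_i tensors):

z = b
for W_i, x_i in zip(weights, inputs):
    # Each iteration adds an independent matmul node to the graph,
    # but the adds themselves form a sequential chain.
    z = z + tf.matmul(W_i, x_i)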

Answer

Yes, it runs the multiple paths of computation of a single session.run call in parallel, controlled by the inter_op_parallelism_threads parameter (a field of tf.ConfigProto). You can use tf.add_n for your sum. If you have multiple session.run calls, you need to parallelize things yourself, for example by launching them in separate Python threads.
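A minimal sketch of both suggestions, assuming TF 1.x-style sessions (the sizes and the weights/inputs lists are hypothetical stand-ins for the question's W_1..W_N and x_1..x_N):

import tensorflow as tf

N, m, k, n = 4, 3, 5, 2  # hypothetical sizes for illustration
weights = [tf.random_normal([m, k]) for _ in range(N)]  # stand-ins for W_1..W_N
inputs = [tf.random_normal([k, n]) for _ in range(N)]   # stand-ins for x_1..x_N
b = tf.zeros([m, n])

# The N matmuls are independent graph nodes; tf.add_n sums them with a
# single n-ary add instead of a chain of binary adds.
products = [tf.matmul(W_i, x_i) for W_i, x_i in zip(weights, inputs)]
z = tf.add_n(products) + b

# The inter-op thread pool that schedules independent ops in parallel is
# sized through the session config (0 means "let TensorFlow pick").
config = tf.ConfigProto(inter_op_parallelism_threads=8)
with tf.Session(config=config) as sess:
    result = sess.run(z)

For several independent session.run calls, each call blocks its calling Python thread, so a common pattern is to issue each one from its own threading.Thread.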
