Nesting control dependencies contexts in tensorflow


Question

Running the test below

from unittest import TestCase

import tensorflow as tf

class TestControl(TestCase):

  def test_control_dep(self):
    print(tf.__version__)
    a = tf.get_variable('a', initializer=tf.constant(0.0))
    d_optim = tf.assign(a, a + 2)
    g_optim = tf.assign(a, a * 2)
    with tf.control_dependencies([d_optim]):
      with tf.control_dependencies([g_optim]):
        with tf.control_dependencies([g_optim]):
          op = tf.Print(a, [a])
    with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      sess.run(op)
      sess.run(op)
      sess.run(op)

prints (for example):

1.4.0
2018-03-18 16:58:08.943349: I C:\tf_jenkins\...\logging_ops.cc:79] [0]
2018-03-18 16:58:08.943349: I C:\tf_jenkins\...\logging_ops.cc:79] [2]
2018-03-18 16:58:08.943349: I C:\tf_jenkins\...\logging_ops.cc:79] [4]

but I have also seen other outputs, such as [2, 8, 10]. I would expect it to print [8, 40, 168] (actually I wanted to make sure g_optim would execute twice, which I was not sure it would). Why are the prints not deterministic, and why does it not seem to always execute g_optim?

NB: running this on an Ubuntu GPU server on EC2 (with tensorflow 1.6) produces 0 every time:

python3 -m unittest tf_test.TestControl.test_control_dep
1.6.0
2018-03-19 08:06:11.614220: ...
2018-03-19 08:06:12.282375: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9610 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
[0]
[0]
[0]
0.0
.
----------------------------------------------------------------------
Ran 1 test in 0.833s

OK


Answer

It's not deterministic because you are creating the assign operations with no control dependencies between them, so they execute in any order.

To execute the assignments in the way you want, their ops need to have the control dependencies at the time they are created. Something like:

a = tf.get_variable('a', initializer=tf.constant(0.0))
with tf.control_dependencies([tf.assign(a, a + 2)]):
  with tf.control_dependencies([tf.assign(a, a * 2)]):
    with tf.control_dependencies([tf.assign(a, a * 2)]):
      op = tf.Print(a, [a])
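With that nesting, each assign is created inside the contexts of the ones above it, so the three updates run in strict order before each print. A plain-Python walk-through (not TensorFlow, just the arithmetic) of the three sess.run(op) calls confirms the sequence the question expected:

```python
# Model the fixed graph's sequencing in plain Python:
# each run applies +2, then *2, then *2 before reading the value.
def run_once(a):
    a = a + 2.0   # the outermost assign (the question's d_optim)
    a = a * 2.0   # first *2 assign
    a = a * 2.0   # second *2 assign (a distinct op this time)
    return a

a = 0.0
values = []
for _ in range(3):
    a = run_once(a)
    values.append(a)

print(values)  # [8.0, 40.0, 168.0]
```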

What your code is doing instead is building up a set of two control dependencies (the repeated g_optim is collapsed) and adding those dependencies, with no ordering between them, to the tf.Print op.
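A minimal sketch of that collapsing, in plain Python rather than TensorFlow internals (the function name and the string op labels are hypothetical, for illustration only): control inputs behave like a set, so nesting the same op twice adds nothing, and tf.Print ends up with just two unordered dependencies.

```python
# Hypothetical model: collect control dependencies the way the nested
# contexts in the question do, deduplicating repeated ops.
def collect_control_deps(*ops):
    deps = []
    for op in ops:
        if op not in deps:   # a repeated op adds no new dependency
            deps.append(op)
    return deps

deps = collect_control_deps("d_optim", "g_optim", "g_optim")
print(deps)  # ['d_optim', 'g_optim'] -- two deps, with no order between them
```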

