How to alternate train ops in TensorFlow?

Question

I am implementing an alternating training scheme. The graph contains two training ops. The training should alternate between these.

Below is a small example. But it seems to update both ops at every step. How can I explicitly alternate between them?

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
# Import data
mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)

# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]), name='weights')
b = tf.Variable(tf.zeros([10]), name='biases')
y = tf.matmul(x, W) + b

# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
global_step = tf.Variable(0, trainable=False)

tvars1 = [b]
train_step1 = tf.train.GradientDescentOptimizer(0.5).apply_gradients(zip(tf.gradients(cross_entropy, tvars1), tvars1), global_step)
tvars2 = [W]
train_step2 = tf.train.GradientDescentOptimizer(0.5).apply_gradients(zip(tf.gradients(cross_entropy, tvars2), tvars2), global_step)
train_step = tf.cond(tf.equal(tf.mod(global_step,2), 0), true_fn= lambda:train_step1, false_fn=lambda : train_step2)


sess = tf.InteractiveSession()
tf.global_variables_initializer().run()


# Train
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    if i % 100 == 0:
        print(sess.run([cross_entropy, global_step], feed_dict={x: mnist.test.images,
                                         y_: mnist.test.labels}))

This results in:

[2.0890141, 2]
[0.38277805, 202]
[0.33943111, 402]
[0.32314575, 602]
[0.3113254, 802]
[0.3006627, 1002]
[0.2965056, 1202]
[0.29858461, 1402]
[0.29135355, 1602]
[0.29006076, 1802]      

The global step climbs to 1802, i.e. it advances by 2 on every call, so both train ops are executed each time train_step is run. (The same happens when the condition is always false, e.g. tf.equal(global_step, -1).)

My question is how to alternate between executing train_step1 and train_step2?

Answer

I think the simplest way is just

for i in range(1000):
  batch_xs, batch_ys = mnist.train.next_batch(100)
  if i % 2 == 0:
    sess.run(train_step1, feed_dict={x: batch_xs, y_: batch_ys})
  else:
    sess.run(train_step2, feed_dict={x: batch_xs, y_: batch_ys})
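
With this loop, each iteration runs exactly one of the two apply_gradients ops, so global_step should now advance by one per iteration instead of two. As a quick sanity check, here is a sketch that reuses the variables from the question's script and keeps its printout (the name step_op is just illustrative):

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # pick exactly one train op per iteration
    step_op = train_step1 if i % 2 == 0 else train_step2
    sess.run(step_op, feed_dict={x: batch_xs, y_: batch_ys})
    if i % 100 == 0:
        # global_step should now read roughly i + 1, not 2 * (i + 1)
        print(sess.run([cross_entropy, global_step],
                       feed_dict={x: mnist.test.images, y_: mnist.test.labels}))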

But if it's necessary to make the switch via TensorFlow conditional flow, do it this way:

optimizer = tf.train.GradientDescentOptimizer(0.5)
train_step = tf.cond(tf.equal(tf.mod(global_step, 2), 0),
                     true_fn=lambda: optimizer.apply_gradients(zip(tf.gradients(cross_entropy, tvars1), tvars1), global_step),
                     false_fn=lambda: optimizer.apply_gradients(zip(tf.gradients(cross_entropy, tvars2), tvars2), global_step))
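
The reason the original version updates both variables is that train_step1 and train_step2 are created outside the tf.cond branch functions: tf.cond only makes ops defined inside true_fn/false_fn conditional, while anything captured from outside is fed into the cond as an external input and runs unconditionally. Below is a minimal sketch of this pitfall (my own example, assuming TensorFlow 1.x; the names v, pred, bad and good are purely illustrative):

import tensorflow as tf

v = tf.Variable(0)
pred = tf.placeholder(tf.bool)

# Created OUTSIDE the branch functions: tf.cond only selects which value is
# returned, but this assign feeds the cond as an external input and runs on
# every evaluation, whatever pred is.
inc_outside = tf.assign_add(v, 1)
bad = tf.cond(pred, lambda: inc_outside, lambda: tf.identity(v))

# Created INSIDE the branch function: the assign lives in the conditional
# context and only runs when the true branch is taken.
good = tf.cond(pred, lambda: tf.assign_add(v, 1), lambda: tf.identity(v))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(bad, feed_dict={pred: False})   # v becomes 1 although pred is False
    sess.run(good, feed_dict={pred: False})  # v stays 1
    print(sess.run(v))                       # -> 1

That is why the answer builds the apply_gradients ops inside the lambdas: only the op of the selected branch is executed, and global_step advances by one per call.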
