How to handle different queue batch size and feed value batch size in TensorFlow?
Question
My code used to work on TensorFlow 0.6, but it no longer works on the latest TensorFlow.
I would like to perform inference every few training iterations. My training data is pulled from a queue, and my inference data comes from feed_dict. The training batch size is 128, while the inference batch size is 1. What should I do to make the network accept the two different batch sizes?
batch_size = 128
x_batch = tf.placeholder("float", [None, 100])
q = tf.FIFOQueue(10, [tf.float32], shapes=[[batch_size, 100]])
enqueue_op = q.enqueue([x_batch])
# during training
x = q.dequeue() # dequeue a full training batch of shape [128, 100]
# network definition, takes x as input, and output y
......
# during inference
x_array_of_batch_size_1 = .. # a 1x100 numpy array
sess.run([y], feed_dict={x: x_array_of_batch_size_1})
I get the following error:
ValueError: Cannot feed value of shape (1, 100) for Tensor u'fifo_queue_Dequeue:0', which has shape '(128, 100)'
Recommended answer
We added this check recently to prevent errors (and to open up some optimization opportunities). You can make your program work again by changing the declaration of x to use the new tf.placeholder_with_default() op:
x = tf.placeholder_with_default(q.dequeue(), shape=[None, 100])
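The dispatch that tf.placeholder_with_default performs can be sketched in plain Python: the network input comes from the queue by default, but any value you feed (of any batch size) overrides it. This is an illustrative sketch only; the queue and helper names below are not part of the TensorFlow API.

```python
from queue import Queue

def get_input(q, fed_value=None):
    """Return the fed value if one is given; otherwise dequeue the default."""
    return fed_value if fed_value is not None else q.get()

q = Queue()
q.put([[0.0] * 100] * 128)  # a training "batch": 128 rows of width 100

train_x = get_input(q)                  # training: 128x100 pulled from the queue
infer_x = get_input(q, [[1.0] * 100])   # inference: a fed 1x100 batch wins

print(len(train_x), len(infer_x))       # 128 1
```

Because the placeholder's declared shape is [None, 100], the batch dimension is left unconstrained, so both the 128-row dequeued batch and the 1-row fed batch pass the shape check.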