Dynamic batch size in tensorflow


Problem description

I have built a classifier using TensorFlow. I generate proposal regions from images, and those proposals are individually classified by my classifier.

My problem is that I do not have a constant batch size when evaluating my model: because every image yields a different number of proposals, the number of proposals to be evaluated is not the same from image to image.

Right now I have set the batch size to 1, but this is inefficient and limits the processing speed of my classifier.

Below is the placeholder for the input to the model:

self.image_op = tf.placeholder(tf.float32, shape=[batch_size, 48, 48, 3], name='input_image')

And this is how I feed the input to the model:

def predict(self, image):
    cls_prob = self.sess.run([self.cls_prob], feed_dict={self.image_op: image})
    return cls_prob

Is there any way of setting the batch size to a dynamic value, without having to restore the model for every image?

Recommended answer

You just need to set tf.Variable(validate_shape=False).

This disables shape validation between iterations, and therefore you will be able to use dynamic batch sizes.
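For illustration only, here is a minimal sketch of how validate_shape=False might be wired into a variable-sized input; the names image_var, new_batch and load_batch, the dummy zero batches, and the assign-based feeding path are assumptions made for the example, not part of the original answer.

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Minimal sketch, not the asker's original code: a non-trainable Variable
# created with validate_shape=False has an unknown static shape, so batches
# with different leading dimensions can be assigned to it between runs.
image_var = tf.Variable(np.zeros((1, 48, 48, 3), np.float32),
                        validate_shape=False, trainable=False,
                        name='input_image')

# Hypothetical loading op: write a new batch of proposals into the variable.
new_batch = tf.placeholder(tf.float32, shape=[None, 48, 48, 3])
load_batch = tf.assign(image_var, new_batch, validate_shape=False)
var_shape = tf.shape(image_var)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for n in (5, 2, 9):  # a different number of proposals per image
        sess.run(load_batch,
                 feed_dict={new_batch: np.zeros((n, 48, 48, 3), np.float32)})
        print(sess.run(var_shape))  # leading dimension follows the batch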

Since tf.placeholder is deprecated you should not use it, but if you still want to use tf.placeholder, you need to disable the TF 2.x behaviour:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
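
As a side note that the answer above does not spell out, standard TF 1.x usage for a variable batch size with tf.placeholder is to leave the leading (batch) dimension as None. Below is a minimal sketch under that assumption; the reduce_mean head is only a stand-in for the asker's real classifier graph.

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Leaving the batch dimension as None lets the same graph accept any
# number of proposals per run; names mirror the asker's code.
image_op = tf.placeholder(tf.float32, shape=[None, 48, 48, 3],
                          name='input_image')
cls_prob = tf.reduce_mean(image_op, axis=[1, 2, 3])  # stand-in classifier head

with tf.Session() as sess:
    for n in (4, 11, 1):  # however many proposals each image produced
        batch = np.zeros((n, 48, 48, 3), dtype=np.float32)
        probs = sess.run(cls_prob, feed_dict={image_op: batch})
        print(probs.shape)  # (n,) -- one score per proposal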
