Regarding defining input_placeholder to predict over a given image


Problem description

In the following code, the author runs prediction over a downloaded image based on the VGG model.

with tf.Graph().as_default():

    url = ("https://upload.wikimedia.org/wikipedia/commons/d/d9/First_Student_IC_school_bus_202076.jpg")

    # Download the image and decode it into a [height, width, 3] uint8 tensor.
    image_string = urllib2.urlopen(url).read()
    image = tf.image.decode_jpeg(image_string, channels=3)

    image_float = tf.to_float(image, name='ToFloat')

    # Subtract the mean pixel value from each pixel
    processed_image = _mean_image_subtraction(image_float,
                                              [_R_MEAN, _G_MEAN, _B_MEAN])

    # Add a batch dimension: [height, width, 3] -> [1, height, width, 3].
    input_image = tf.expand_dims(processed_image, 0)

    with slim.arg_scope(vgg.vgg_arg_scope()):
        logits, _ = vgg.vgg_16(input_image,
                               num_classes=1000,
                               is_training=False,
                               spatial_squeeze=False)

    pred = tf.argmax(logits, dimension=3)

    # Restore the pretrained VGG-16 weights from the checkpoint.
    init_fn = slim.assign_from_checkpoint_fn(
        os.path.join(checkpoints_dir, 'vgg_16.ckpt'),
        slim.get_model_variables('vgg_16'))

    with tf.Session() as sess:
        init_fn(sess)
        segmentation, np_image, np_logits = sess.run([pred, image, logits])

I have been trying to predict over an existing image read via OpenCV. The only modifications I made are to read the image via cv2, add an input_placeholder, and modify sess.run accordingly. However, I get the following error message:

segmentation, np_image, np_logits = sess.run([pred,logits],feed_dict={input_placeholder:image})
ValueError: need more than 2 values to unpack

Could you let me know which of my modifications is wrong?

with tf.Graph().as_default():

    image = cv2.imread('/data/cat.jpg', cv2.IMREAD_UNCHANGED)
    input_placeholder = tf.placeholder(tf.float32,
                                       shape=[image.shape[0], image.shape[1], image.shape[2]])
    image_float = np.float32(image)

    # Subtract the mean pixel value from each pixel
    processed_image = _mean_image_subtraction(image_float, [_R_MEAN, _G_MEAN, _B_MEAN])

    input_image = tf.expand_dims(processed_image, 0)

    with slim.arg_scope(vgg.vgg_arg_scope()):
        logits, _ = vgg.vgg_16(input_image,
                               num_classes=1000,
                               is_training=False,
                               spatial_squeeze=False)

    pred = tf.argmax(logits, dimension=3)

    init_fn = slim.assign_from_checkpoint_fn(
        os.path.join(checkpoints_dir, 'vgg_16.ckpt'),
        slim.get_model_variables('vgg_16'))

    with tf.Session() as sess:
        init_fn(sess)
        segmentation, np_image, np_logits = sess.run([pred, logits], feed_dict={input_placeholder: image})

Answer

For reference, first take a look at the official documentation: https://www.tensorflow.org/api_docs/python/tf/Session#run

For each graph element passed to the fetches parameter of sess.run(), you get one value back. In your case you are passing the list [pred, logits] as fetches, so sess.run([pred, logits], ...) returns two values: the result of running the pred op and the result of running the logits op.

Quoting the documentation:

The value returned by run() has the same shape as the fetches argument, where the leaves are replaced by the corresponding values returned by TensorFlow.
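
As a standalone illustration (not part of the question's code), the structure of the value returned by run() mirrors the structure you pass as fetches:

import tensorflow as tf

a = tf.constant(1.0)
b = tf.constant(2.0)

with tf.Session() as sess:
    # A list of two fetches returns a list of two values ...
    va, vb = sess.run([a, b])          # va == 1.0, vb == 2.0
    # ... and a dict of fetches returns a dict with the same keys.
    d = sess.run({'a': a, 'b': b})     # {'a': 1.0, 'b': 2.0}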

But in this line:

segmentation, np_image, np_logits = sess.run([pred,logits],feed_dict={input_placeholder:image})

you are trying to assign these two values to three different Python variables (segmentation, np_image, np_logits), hence you get the ValueError.
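
The ValueError itself is ordinary Python unpacking and not TensorFlow-specific; the same error can be reproduced without a session:

# Two values cannot be unpacked into three names.
x, y, z = [1, 2]
# Python 2: ValueError: need more than 2 values to unpack
# Python 3: ValueError: not enough values to unpack (expected 3, got 2)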

If you look at the original example you provided, the final line is:

segmentation, np_image, np_logits = sess.run([pred, image, logits])

To mimic the original example, remove the np_image variable from the assignment in your code, like so:

segmentation, np_logits = sess.run([pred,logits],feed_dict={input_placeholder:image})
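
In context, the final session block of your script would then look roughly like this (a sketch assuming the rest of your graph is unchanged):

with tf.Session() as sess:
    init_fn(sess)
    # Two fetches -> two return values, unpacked into two variables.
    segmentation, np_logits = sess.run([pred, logits],
                                       feed_dict={input_placeholder: image})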

