Running label_image.py in a loop


Question

My goal is to continuously classify .jpg images coming from a video stream.

To do so I have just modified the label_image.py example.

I'm loading the graph and opening the session beforehand. Then I'm only running the following code in a loop:

t = read_tensor_from_image_file(file_name,
                                input_height=input_height,
                                input_width=input_width,
                                input_mean=input_mean,
                                input_std=input_std)


input_operation = graph.get_operation_by_name(input_name)
output_operation = graph.get_operation_by_name(output_name)

results = sess2.run(output_operation.outputs[0],
                  {input_operation.outputs[0]: t}
                  )

results = np.squeeze(results)

top_k = results.argsort()[-5:][::-1]
labels = load_labels(label_file)

It works well for a few minutes, but the classification slows down progressively with every cycle, going from half a second to a few seconds within a minute. Memory usage is also climbing slowly, by about 1 MB every 3 seconds.

If I classify a single image multiple times, leaving out the "read_tensor_from_image_file", I don't get this bug.

So something in the image loading code must be taking up more space every time, not clearing up properly:

def read_tensor_from_image_file(file_name, input_height=192, input_width=192,
                                input_mean=0, input_std=255):
  input_name = "file_reader"
  output_name = "normalized"
  file_reader = tf.read_file(file_name, input_name)
  if file_name.endswith(".png"):
    image_reader = tf.image.decode_png(file_reader, channels = 3,
                                       name='png_reader')
  elif file_name.endswith(".gif"):
    image_reader = tf.squeeze(tf.image.decode_gif(file_reader,
                                                  name='gif_reader'))
  elif file_name.endswith(".bmp"):
    image_reader = tf.image.decode_bmp(file_reader, name='bmp_reader')
  else:
    image_reader = tf.image.decode_jpeg(file_reader, channels = 3,
                                        name='jpeg_reader')
  float_caster = tf.cast(image_reader, tf.float32)
  dims_expander = tf.expand_dims(float_caster, 0)
  resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
  normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])

  result = sess1.run(normalized)


  return result
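For reference, the normalization step at the end of this function is just an element-wise (pixel − mean) / std applied after the bilinear resize. A minimal NumPy sketch of that arithmetic (illustrative only, not the TensorFlow graph itself), using the default input_mean=0 and input_std=255:

```python
import numpy as np

def normalize(image, input_mean=0, input_std=255):
    # Same arithmetic as the graph's tf.divide(tf.subtract(...)) ops:
    # shift by the mean, then scale by the standard deviation.
    return (image.astype(np.float32) - input_mean) / input_std

raw = np.array([[0, 127, 255]], dtype=np.uint8)  # a made-up row of pixels
print(normalize(raw))  # with the defaults, values land in [0, 1]
```

With input_mean=0 and input_std=255 this simply rescales 8-bit pixel values into the [0, 1] range the model expects.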

Every suggestion is very much appreciated; I'm totally stuck on this one.

I'm using Python 3.4.2 with TensorFlow 1.1.0 on a Raspberry Pi running Raspbian Jessie.

Thanks a lot!

Answer

Every time you call read_tensor_from_image_file, it creates a number of new nodes in the TensorFlow graph. Since this function is called in a loop in your code, it dynamically adds new graph nodes on every iteration. This is the likely cause of the growing memory usage and the slowdown.

A better way is to build the graph once, and then just run it in your loop. For example, you can modify your read_tensor_from_image_file as follows:

def read_tensor_from_image_file(input_height=192, input_width=192, input_mean=0, input_std=255):
  input_name = "file_reader"
  output_name = "normalized"

  # [NEW] make file_name a placeholder, fed at run time.
  file_name = tf.placeholder(tf.string, name="fname")

  file_reader = tf.read_file(file_name, input_name)
  ...
  normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])

  # [NEW] don't call sess1 when building graph.
  # result = sess1.run(normalized)    
  # return result
  return normalized

In your server, you invoke read_tensor_from_image_file only once, and save the returned op somewhere, e.g. read_tensor_from_image_file_op = read_tensor_from_image_file(...).

In your loop, you can simply call:

t = sess2.run(read_tensor_from_image_file_op, feed_dict={"fname:0": file_name})

input_operation = graph.get_operation_by_name(input_name)
output_operation = graph.get_operation_by_name(output_name)
results = sess2.run(output_operation.outputs[0],
                  {input_operation.outputs[0]: t}
                  )
results = np.squeeze(results)

top_k = results.argsort()[-5:][::-1]
labels = load_labels(label_file)
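As an aside, the argsort line above selects the indices of the five highest scores and reverses them so the best match comes first. A small NumPy sketch with made-up scores shows what it does:

```python
import numpy as np

# Hypothetical per-class scores, as the model might produce after np.squeeze.
results = np.array([0.05, 0.60, 0.10, 0.02, 0.15, 0.08])

# argsort sorts ascending; take the last 5 indices (the 5 largest scores)
# and reverse so the highest-scoring class comes first.
top_k = results.argsort()[-5:][::-1]
print(top_k)  # -> [1 4 2 5 0]
```

Each index in top_k is then looked up in the labels list to print the top-5 predictions.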

Hope it helps.
