Image conversion in TensorFlow slows over time


Problem Description

I have this method that takes an image and converts it into a tensor. I am invoking it in a loop, and the execution time of the conversion starts small and keeps growing.

import tensorflow as tf

def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                                input_mean=0, input_std=255):
    input_name = "file_reader"
    output_name = "normalized"
    file_reader = tf.read_file(file_name, input_name)
    image_reader = tf.image.decode_jpeg(file_reader, channels=3,
                                        name='jpeg_reader')
    float_caster = tf.cast(image_reader, tf.float32)
    dims_expander = tf.expand_dims(float_caster, 0)
    resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
    normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
    sess = tf.Session()
    result = sess.run(normalized)

    return result
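
For context, here is a minimal sketch of the kind of calling loop described above, with an illustrative file list and timing added (neither is part of the original question):

import time

# Hypothetical list of JPEG paths; substitute your own files.
file_names = ["image_%d.jpg" % i for i in range(100)]

for file_name in file_names:
    start = time.time()
    tensor = read_tensor_from_image_file(file_name)
    # Each call adds new ops to the default graph, so this duration keeps growing.
    print("conversion took %.3f s" % (time.time() - start))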

How can I optimize this?

Answer

The problem is almost certainly due to the use of the same default tf.Graph across many calls to your read_tensor_from_image_file() function. The easiest way to fix this is to add a with tf.Graph().as_default(): block around the function body, as follows:

def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                                input_mean=0, input_std=255):
    # Build the ops in a fresh graph so repeated calls do not grow the default graph.
    with tf.Graph().as_default():
        input_name = "file_reader"
        output_name = "normalized"
        file_reader = tf.read_file(file_name, input_name)
        image_reader = tf.image.decode_jpeg(file_reader, channels=3,
                                            name='jpeg_reader')
        float_caster = tf.cast(image_reader, tf.float32)
        dims_expander = tf.expand_dims(float_caster, 0)
        resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
        normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
        # Use a context manager so the session is closed when the call returns.
        with tf.Session() as sess:
            return sess.run(normalized)

With this change, each call to the function will create a new graph, rather than adding nodes to the default graph (which would otherwise grow over time, leak memory, and take longer to run each time you use it).
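
As a side note that is not part of the original answer: in TensorFlow 1.x you can catch this kind of accidental graph growth early by finalizing the default graph once you have built everything you intend to build; any later attempt to add an op then raises a RuntimeError instead of silently slowing the program down.

graph = tf.get_default_graph()
graph.finalize()  # the graph is now read-only

# A subsequent call such as tf.read_file(...) would now raise a RuntimeError,
# pointing directly at the code that is still adding ops on every iteration.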

A more efficient version would use a tf.placeholder() for the filename, construct a single graph, and move the for loop inside the TensorFlow session. Something like the following would work:

def read_tensors_from_image_files(file_names, input_height=299, input_width=299,
                                  input_mean=0, input_std=255):
    with tf.Graph().as_default():
        input_name = "file_reader"
        output_name = "normalized"
        # Feed the filename at run time instead of rebuilding the graph per image.
        file_name_placeholder = tf.placeholder(tf.string, shape=[])
        file_reader = tf.read_file(file_name_placeholder, input_name)
        image_reader = tf.image.decode_jpeg(file_reader, channels=3,
                                            name='jpeg_reader')
        float_caster = tf.cast(image_reader, tf.float32)
        dims_expander = tf.expand_dims(float_caster, 0)
        resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
        normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])

        # Run the single, fixed graph once per file inside one session.
        with tf.Session() as sess:
            for file_name in file_names:
                yield sess.run(normalized, {file_name_placeholder: file_name})
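
For illustration, a minimal way to consume this generator (the file list below is hypothetical, not taken from the original answer):

# Hypothetical list of JPEG paths; substitute your own files.
file_names = ["image_0.jpg", "image_1.jpg", "image_2.jpg"]

for tensor in read_tensors_from_image_files(file_names):
    # Each yielded value is a NumPy array of shape (1, input_height, input_width, 3).
    print(tensor.shape)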
