Overcome "GraphDef cannot be larger than 2GB" in TensorFlow

Problem description

I am using TensorFlow's ImageNet-trained model to extract the last pooling layer's features as representation vectors for a new dataset of images.

The model, as provided, predicts on a new image as follows:

python classify_image.py --image_file new_image.jpeg 

I edited the main function so that it takes a folder of images, returns the predictions for all of them in one run, and writes the feature vectors to a CSV file. Here is how I did that:

def main(_):
  maybe_download_and_extract()
  #image = (FLAGS.image_file if FLAGS.image_file else
  #         os.path.join(FLAGS.model_dir, 'cropped_panda.jpg'))
  # edited to take a directory of image files instead of a single file
  if FLAGS.data_folder:
    images_folder = FLAGS.data_folder
    list_of_images = os.listdir(images_folder)
  else:
    raise ValueError("Please specify image folder")

  # note: this also requires `import csv` alongside the existing imports
  with open("feature_data.csv", "wb") as f:
    feature_writer = csv.writer(f, delimiter='|')

    for image in list_of_images:
      print(image)
      current_features = run_inference_on_image(images_folder+"/"+image)
      feature_writer.writerow([image]+current_features)

It worked just fine for around 21 images but then crashed with the following error:

  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1912, in as_graph_def
    raise ValueError("GraphDef cannot be larger than 2GB.")
ValueError: GraphDef cannot be larger than 2GB.

I thought that each call to run_inference_on_image(images_folder+"/"+image) would overwrite the previous image data so that only the new image is considered, but that doesn't seem to be the case. How can I resolve this issue?

Recommended answer

The problem here is that each call to run_inference_on_image() adds nodes to the same graph, which eventually exceeds the maximum size. There are at least two ways to fix this:
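
To see why this happens, here is a minimal sketch (assuming TensorFlow 1.x, matching the python2.7 traceback above) of how the default graph keeps growing when a model-building function is called repeatedly; build_model() is a hypothetical stand-in for the graph construction done inside run_inference_on_image():

import tensorflow as tf  # TF 1.x, as in the question

def build_model():
  # hypothetical stand-in for the graph construction inside run_inference_on_image()
  return tf.constant(0.0) + tf.constant(1.0)

for i in range(3):
  build_model()
  # the default graph accumulates nodes across calls; serializing it with
  # as_graph_def() fails once the GraphDef exceeds the 2GB protobuf limit
  print(len(tf.get_default_graph().as_graph_def().node))  # prints 3, 6, 9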

  1. The easy but slow way is to use a different default graph for each call to run_inference_on_image():

for image in list_of_images:
  # ...
  with tf.Graph().as_default():
    current_features = run_inference_on_image(images_folder+"/"+image)
  # ...

  2. The more involved but more efficient way is to modify run_inference_on_image() to run on multiple images. Move your for loop so that it surrounds the sess.run() call; then you no longer rebuild the entire model on each call, which should make processing each image much faster. A sketch of this restructuring follows below.
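
For reference, here is a minimal sketch of that restructuring (not the exact classify_image.py code; create_graph(), the 'pool_3:0' and 'DecodeJpeg/contents:0' tensor names, and the imports of tensorflow as tf and numpy as np follow the original Inception example, so treat the details as assumptions). The graph is built and the session is created once, and only sess.run() happens inside the loop:

def run_inference_on_images(image_paths):
  create_graph()  # build the Inception graph exactly once

  features = []
  with tf.Session() as sess:
    pool_tensor = sess.graph.get_tensor_by_name('pool_3:0')
    for image_path in image_paths:
      image_data = tf.gfile.FastGFile(image_path, 'rb').read()
      # only sess.run() is inside the loop, so no new nodes are added to the graph
      pool_values = sess.run(pool_tensor,
                             {'DecodeJpeg/contents:0': image_data})
      features.append(np.squeeze(pool_values))
  return features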
