Dynamically Editing Pipeline Config for Tensorflow Object Detection


Problem description

I'm using the TensorFlow Object Detection API, and I want to be able to edit the config file dynamically in Python. The file looks like the one below. I thought of using the protocol buffers library in Python, but I'm not sure how to go about it.

model {
  ssd {
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    feature_extractor {
      type: "ssd_inception_v2"
      depth_multiplier: 1.0
      min_depth: 16
      conv_hyperparams {
        regularizer {
          l2_regularizer {
            weight: 3.99999989895e-05
          }
        }
        initializer {
          truncated_normal_initializer {
            mean: 0.0
            stddev: 0.0299999993294
          }
        }
        activation: RELU_6
        batch_norm {
          decay: 0.999700009823
          center: true
          scale: true
          epsilon: 0.0010000000475
          train: true
        }
      }
      ...
      ...
    }
  }
}

Is there a simple/easy way to change specific values, such as the height in image_resizer -> fixed_shape_resizer, from, say, 300 to 500, and write the file back with the modified values without changing anything else?

Though the answer provided by @DmytroPrylipko worked for most of the parameters in the config, I ran into some issues with "composite fields".

That is, if we have a configuration like:

train_input_reader: {
  label_map_path: "/tensorflow/data/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/tensorflow/models/data/train.record"
  }
}

And I add this line to edit input_path:

 pipeline_config.train_input_reader.tf_record_input_reader.input_path = "/tensorflow/models/data/train100.record"

It throws this error:

TypeError: Can't set composite field

Recommended answer

Yes, using the Protobuf Python API this is quite easy:

edit_pipeline.py:

import argparse

import tensorflow as tf
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2


def parse_arguments():
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('pipeline')
    parser.add_argument('output')
    return parser.parse_args()


def main():
    args = parse_arguments()
    pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()

    # Parse the existing config (protobuf text format) into a message object.
    with tf.gfile.GFile(args.pipeline, "r") as f:
        proto_str = f.read()
        text_format.Merge(proto_str, pipeline_config)

    # Edit whichever fields you need; everything else is left untouched.
    pipeline_config.model.ssd.image_resizer.fixed_shape_resizer.height = 300
    pipeline_config.model.ssd.image_resizer.fixed_shape_resizer.width = 300

    # Serialize back to text format and write it out.
    config_text = text_format.MessageToString(pipeline_config)
    with tf.gfile.Open(args.output, "wb") as f:
        f.write(config_text)


if __name__ == '__main__':
    main()

This is how I call the script:

TOOL_DIR=tool/tf-models/research

(
   cd $TOOL_DIR
   protoc object_detection/protos/*.proto --python_out=.
)

export PYTHONPATH=$PYTHONPATH:$TOOL_DIR:$TOOL_DIR/slim

python3 edit_pipeline.py pipeline.config pipeline_new.config
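For the 300 -> 500 change asked about in the question, nothing in the script needs to differ apart from the values assigned before serializing. A minimal sketch (not part of the original answer), assuming pipeline_config has already been filled by text_format.Merge() as above:

# Assumes pipeline_config was parsed by text_format.Merge() as in edit_pipeline.py.
pipeline_config.model.ssd.image_resizer.fixed_shape_resizer.height = 500
pipeline_config.model.ssd.image_resizer.fixed_shape_resizer.width = 500

# MessageToString() re-serializes the whole message, so every field you did
# not touch is written back unchanged.
config_text = text_format.MessageToString(pipeline_config)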

Composite fields

In the case of repeated fields, you must treat them as arrays (e.g. use the extend() or append() methods):

pipeline_config.train_input_reader.tf_record_input_reader.input_path[0] = '/tensorflow/models/data/train100.record'
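Note that indexing with [0] only works if the repeated field already has at least one entry; appending or extending is the safer pattern when it may be empty. A small sketch of my own (the extra .record paths are made up for illustration):

input_reader = pipeline_config.train_input_reader.tf_record_input_reader

# Drop whatever paths the config currently lists (ClearField also works on repeated fields).
input_reader.ClearField('input_path')

# append() adds a single path, extend() adds several shards at once.
input_reader.input_path.append('/tensorflow/models/data/train100.record')
input_reader.input_path.extend([
    '/tensorflow/models/data/train101.record',
    '/tensorflow/models/data/train102.record',
])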

Eval input reader error

This is a common error when trying to edit the composite field ("no attribute tf_record_input_reader found" in the case of eval_input_reader).

This is mentioned below in @latida's answer. Fix it by treating eval_input_reader as an array field:

pipeline_config.eval_input_reader[0].label_map_path  = label_map_full_path
pipeline_config.eval_input_reader[0].tf_record_input_reader.input_path[0] = val_record_path
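The indexing is needed because, in the version of pipeline.proto this answer targets, eval_input_reader is itself a repeated field, so it behaves like a list of InputReader messages. A hedged sketch of the same edit written defensively (variable names as in the snippet above):

# eval_input_reader is repeated: take the first entry, or create one if the list is empty.
if len(pipeline_config.eval_input_reader):
    eval_reader = pipeline_config.eval_input_reader[0]
else:
    eval_reader = pipeline_config.eval_input_reader.add()

eval_reader.label_map_path = label_map_full_path
del eval_reader.tf_record_input_reader.input_path[:]   # clear any existing paths
eval_reader.tf_record_input_reader.input_path.append(val_record_path)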

