Seeing the float32 weight in a proto file


Problem description


I have converted the Google Inception trained model .pb file, which reads like below:

A
mixed_9/join/concat_dimConst*

dtype0*
value   :
A
mixed_8/join/concat_dimConst*

dtype0*
value   :
A
mixed_7/join/concat_dimConst*

dtype0*
value   :
A
mixed_6/join/concat_dimConst*

using Google Protobuf's protoc --decode_raw, which reads from stdin. Now, the output reads like a .proto file, including the names of the layers and some encoded numbers. Here are the first 30 lines of the .proto file:

syntax="proto2";
1 {
  1: "mixed_10/join/concat_dim"
  2: "Const"
  5 {
    1: "dtype"
    2 {
      6: 3
    }
  }
  5 {
    1: "value"
    2 {
      8 {
        1: 3
        2: ""
        7: "\003"
      }
    }
  }
1 {
  1: "mixed_9/join/concat_dim"
  2: "Const"
  5 {
    1: "dtype"
    2 {
      6: 3
    }
  }

Parsing the file, I am looking for the trained weights of the Inception model, for instance in this case:

1 {
  1: "Mul"
  2 {
    10: 108
    12: 0x7265646c6f686563
  }
  5 {
    1: "dtype"
    2 {
      6: 1
    }
  }
  5 {
    1: "shape"
    2 {
      7: ""
    }
  }
}

On the other hand, using a small python script I could print out all the tensors in the inception model:

import os
from pprint import pprint

import tensorflow as tf
from tensorflow.python.platform import gfile

INCEPTION_LOG_DIR = '/tmp/inception_v3_log'

if not os.path.exists(INCEPTION_LOG_DIR):
    os.makedirs(INCEPTION_LOG_DIR)

with tf.Session() as sess:
    model_filename = './model/tensorflow_inception_v3_stripped_optimized_quantized.pb'
    # Load the frozen GraphDef and import it into the default graph
    with gfile.FastGFile(model_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')
    # List every float32 output tensor of the non-placeholder ops
    pprint([out for op in tf.get_default_graph().get_operations()
            if op.type != 'Placeholder'
            for out in op.values()
            if out.dtype == tf.float32])

This lists all the layers of that model. So, that Mul layer corresponds to the middle line of the output of my Python script:

(<tf.Tensor 'mixed/join/concat_dim:0' shape=() dtype=int32>,)
(<tf.Tensor 'Mul:0' shape=<unknown> dtype=float32>,)
(<tf.Tensor 'conv/conv2d_params_quint8_const:0' shape=(3, 3, 3, 32) dtype=quint8>,)

My issue is that I don't find a way to read these float32 values which I assume are the weights for each layer.

I have tried protoc v3.3 on my .proto file, but I am receiving an error:

$ protoc inception.proto.utf --print_free_field_numbers
inception.proto.utf:2:1: Expected top-level statement (e.g. "message").

Any help would be appreciated.

P.S.: The .pb file of the inception_model is available here.

Solution

Unless your model doesn't have any variables (trained model parameters), or they had already been converted to constants before export, you'll also need to load the variable values from a separate checkpoint file. They may also be difficult to load, because from what I understand .pb files don't save the collections the variables were in when they were saved. MetaGraphDefs were created for this reason, and there's a good chance you'll be better off looking for a relevant one of those; see the sketch below.
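
For example, a minimal sketch of restoring from a MetaGraphDef plus checkpoint could look like this (the .meta and checkpoint paths are placeholders, not files that ship with the frozen Inception .pb):

import tensorflow as tf

# Placeholder paths -- these only exist if whoever trained the model also
# exported a MetaGraphDef and checkpoint alongside the frozen .pb.
META_PATH = './model/inception_v3.meta'
CKPT_PATH = './model/inception_v3.ckpt'

with tf.Session() as sess:
    # import_meta_graph rebuilds the graph and returns a Saver for it
    saver = tf.train.import_meta_graph(META_PATH)
    # restore() fills in the variable values from the checkpoint
    saver.restore(sess, CKPT_PATH)
    # the trained weights can then be read out as numpy arrays
    for var in tf.trainable_variables():
        print(var.name, sess.run(var).shape)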

If your model truly doesn't have any variables, you should be able to get the values of that layer by running the session after loading the graph def.

session.run('Mul:0')

You may have to use a feed_dict if the model has placeholders.
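
For instance, something along these lines (the 'input:0' name and the 299x299x3 shape are assumptions about the Inception v3 input placeholder; check your graph for the actual name and shape):

import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile

model_filename = './model/tensorflow_inception_v3_stripped_optimized_quantized.pb'

with tf.Session() as sess:
    # Load the frozen GraphDef and import it into the default graph
    with gfile.FastGFile(model_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
    # Feed a dummy image to the (assumed) input placeholder and fetch 'Mul:0'
    dummy_image = np.zeros((1, 299, 299, 3), dtype=np.float32)
    result = sess.run('Mul:0', feed_dict={'input:0': dummy_image})
    print(result)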

Note: these won't be the weights of the layer, but the result of the multiplication.
