Error converting Facenet model .pb file to TFLITE format


Problem description

I'm trying to convert a pre-trained frozen .pb based on Inception ResNet, which I got from David Sandberg's Github, with the TensorFlow Lite Converter on Ubuntu, using the following command:

/home/nils/.local/bin/tflite_convert \
  --output_file=/home/nils/Documents/frozen.tflite \
  --graph_def_file=/home/nils/Documents/20180402-114759/20180402-114759.pb \
  --input_arrays=input \
  --output_arrays=embeddings \
  --input_shapes=1,160,160,3

However, I get the following error:

2018-12-03 15:03:16.807431: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Traceback (most recent call last):
File "/home/nils/.local/bin/tflite_convert", line 11, in <module>
sys.exit(main())
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
_convert_model(tflite_flags)
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 162, in _convert_model
output_data = converter.convert()
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py", line 453, in convert
**converter_kwargs)
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 342, in toco_convert_impl
input_data.SerializeToString())
File "/home/nils/.local/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
(stdout, stderr))
RuntimeError: TOCO failed see console for info.
2018-12-03 15:03:26.006252: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: FIFOQueueV2
2018-12-03 15:03:26.006322: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute: batch_join/fifo_queue
2018-12-03 15:03:26.006339: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1080] Converting unsupported operation: QueueDequeueUpToV2
2018-12-03 15:03:26.006352: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1127] Op node missing output type attribute: batch_join
2018-12-03 15:03:27.496676: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 5601 operators, 9399 arrays (0 quantized)
2018-12-03 15:03:28.603936: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 3578 operators, 6254 arrays (0 quantized)
2018-12-03 15:03:29.418074: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 3578 operators, 6254 arrays (0 quantized)
2018-12-03 15:03:29.420354: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:42]
Check failed: IsConstantParameterArray(*model, bn_op->inputs[1]) && IsConstantParameterArray(*model, bn_op->inputs[2]) && IsConstantParameterArray(*model, bn_op->inputs[3]) Batch normalization resolution requires that mean, multiplier and offset arrays be constant.
Aborted (core dumped)

If I get this right, this might be because of two unsupported ops, QueueDequeueUpToV2 and FIFOQueueV2, but I don't know for sure. Do you have any ideas what the problem might be, or how I can solve this error? What does that error even mean? I want this model to run on a mobile Android device; are there any alternatives?

Versions: TensorFlow 1.12, Python 3.6.7, Ubuntu 18.04.1 LTS on a VirtualBox.

Thanks in advance!
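One way to check whether those queue ops are really present in the frozen graph is to scan the GraphDef for their node types before running the converter. The scanning helper below is plain Python; loading the graph (shown in the comment) assumes TensorFlow 1.x and is only a sketch:

```python
def find_unsupported_ops(graph_def, op_types=("FIFOQueueV2", "QueueDequeueUpToV2")):
    """Return (node_name, op_type) pairs for every node whose op is in op_types."""
    return [(node.name, node.op) for node in graph_def.node if node.op in op_types]

# With TensorFlow 1.x installed, the frozen graph would be loaded roughly like:
#   import tensorflow as tf
#   graph_def = tf.GraphDef()
#   with open("20180402-114759.pb", "rb") as f:
#       graph_def.ParseFromString(f.read())
#   print(find_unsupported_ops(graph_def))
```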

Recommended answer

I had no luck with @milind-deore's suggestions. The model does shrink to 23 MB, but the embeddings seem to be broken.

I found an alternative way: TF -> Keras -> TF Lite

David Sandberg's FaceNet implementation can be converted to TensorFlow Lite by first converting from TensorFlow to Keras, and then from Keras to TensorFlow Lite.

I created this Google Colab that does the conversion. Most of the code was taken from here.

Here is what it does:

  1. Download Hiroki Taniai's Keras FaceNet implementation
  2. Override the inception_resnet_v1.py file with my patched version (which adds an extra layer to the model to output normalized embeddings)
  3. Download Sandberg's pre-trained model (20180402-114759) from here, and unzip it
  4. Extract the tensors from the checkpoint file and write the weights to numpy arrays on disk, mapping the name of each corresponding layer
  5. Create a new Keras model with random weights (important: using 512 classes)
  6. Write the weights for each corresponding layer, reading from the numpy arrays
  7. Store the model in the Keras .h5 format
  8. Convert from Keras to TensorFlow Lite using the tflite_convert command
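Steps 4 and 6 above can be sketched roughly as follows. The checkpoint path and the layer-name mapping are placeholders: the exact variable-to-layer correspondence depends on the naming schemes of the two implementations, so `name_map` must be built for your model.

```python
import numpy as np

def extract_checkpoint_weights(reader, skip_substrings=("Adam", "global_step")):
    """Step 4: dump every checkpoint tensor into a dict of numpy arrays,
    skipping optimizer slots and bookkeeping variables."""
    weights = {}
    for name in reader.get_variable_to_shape_map():
        if any(s in name for s in skip_substrings):
            continue
        weights[name] = np.asarray(reader.get_tensor(name))
    return weights

def load_into_keras(model, weights, name_map):
    """Step 6: copy the extracted arrays into the matching Keras layers.
    name_map maps each Keras layer name to its checkpoint variable names,
    in the order set_weights expects (e.g. [kernel, bias])."""
    for layer_name, var_names in name_map.items():
        model.get_layer(layer_name).set_weights([weights[v] for v in var_names])

# With TensorFlow 1.x, a real reader comes from the unzipped checkpoint, e.g.:
#   import tensorflow as tf
#   reader = tf.train.NewCheckpointReader(
#       "20180402-114759/model-20180402-114759.ckpt-275")  # path is illustrative
```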

tflite_convert --post_training_quantize --output_file facenet.tflite --keras_model_file /content/keras-facenet/model/keras/model/facenet_keras.h5

Also, in my Colab I provide some code to show that the conversion works and that the TFLite model is functional:

distance bill vs bill 0.7266881
distance bill vs larry 1.2134411

So even though I'm not aligning the faces, a threshold of about 1.2 should work well for recognition.
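The comparison above boils down to the Euclidean distance between L2-normalized embeddings, with roughly 1.2 as the decision threshold. A minimal sketch (the embedding values here are made up; tune the threshold on your own data):

```python
import numpy as np

THRESHOLD = 1.2  # rough cutoff suggested by the distances above

def l2_normalize(v, eps=1e-10):
    return v / max(np.linalg.norm(v), eps)

def same_person(emb_a, emb_b, threshold=THRESHOLD):
    """Compare two face embeddings: small distance means same identity."""
    dist = np.linalg.norm(l2_normalize(emb_a) - l2_normalize(emb_b))
    return dist < threshold, dist

# Example with made-up 512-d embeddings:
rng = np.random.RandomState(0)
a = rng.randn(512)
same, d = same_person(a, a + 0.05 * rng.randn(512))  # near-duplicate: small distance
diff, d2 = same_person(a, rng.randn(512))            # unrelated: distance near sqrt(2)
```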

Hope that helps!
