Post-training quantization for MobileNet V1 not working


Problem description

I am trying to convert a MobileNet V1 .pb file to a quantized tflite file. I used the command below to do the quantization:

  tflite_convert \
  --output_file=/home/wc/users/Mostafiz/TPU/models/mobilnet/test2_4thSep/mobilenetv1_test5.tflite \
  --graph_def_file=/home/wc/users/Mostafiz/TPU/models/mobilnet/mobileNet_frozen_graph.pb \
  --output_format=TFLITE \
  --inference_type=QUANTIZED_UINT8 \
  --inference_input_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1 \
  --inference_output_type=QUANTIZED_UINT8 \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --std_dev_values=127 \
  --mean_value=128
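For reference, the converter uses the `--mean_value`/`--std_dev_values` pair above to dequantize uint8 inputs as `real = (quantized - mean) / std`; with mean 128 and std 127 this maps uint8 pixels to roughly [-1, 1], which matches MobileNet's expected float preprocessing. A minimal NumPy sketch of that mapping:

```python
import numpy as np

# The converter dequantizes uint8 inputs as: real = (quantized - mean) / std.
mean, std = 128.0, 127.0
q = np.array([0, 128, 255], dtype=np.float32)
real = (q - mean) / std
print(real)  # roughly [-1.008, 0.0, 1.0]
```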

The .tflite file is created without any error, but when I try to use it for inference the output classes are messed up: none of the test images give the correct result.

Not sure where I am going wrong; can someone please help me?

For inference I am using the 'label_image.py' script provided by TensorFlow. Here is the code:

"""label_image for tflite"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import numpy as np

from PIL import Image

from tensorflow.lite.python import interpreter as interpreter_wrapper

def load_labels(filename):
  # Read one label per line; the context manager closes the file.
  with open(filename, 'r') as input_file:
    return [line.strip() for line in input_file]

if __name__ == "__main__":
  floating_model = False

  parser = argparse.ArgumentParser()
  parser.add_argument("-i", "--image", default="/tmp/grace_hopper.bmp", \
    help="image to be classified")
  parser.add_argument("-m", "--model_file", \
    default="/tmp/mobilenet_v1_1.0_224_quant.tflite", \
    help=".tflite model to be executed")
  parser.add_argument("-l", "--label_file", default="/tmp/labels.txt", \
    help="name of file containing labels")
  parser.add_argument("--input_mean", default=127.5, type=float,
    help="input_mean")
  parser.add_argument("--input_std", default=127.5, type=float, \
    help="input standard deviation")
  args = parser.parse_args()

  interpreter = interpreter_wrapper.Interpreter(model_path=args.model_file)
  interpreter.allocate_tensors()

  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()

  # check the type of the input tensor
  if input_details[0]['dtype'] == np.float32:
    floating_model = True

  # NxHxWxC, H:1, W:2
  height = input_details[0]['shape'][1]
  width = input_details[0]['shape'][2]
  img = Image.open(args.image)
  img = img.resize((width, height))

  # add N dim
  input_data = np.expand_dims(img, axis=0)

  if floating_model:
    input_data = (np.float32(input_data) - args.input_mean) / args.input_std

  interpreter.set_tensor(input_details[0]['index'], input_data)

  interpreter.invoke()

  output_data = interpreter.get_tensor(output_details[0]['index'])
  results = np.squeeze(output_data)

  top_k = results.argsort()[-5:][::-1]
  labels = load_labels(args.label_file)
  for i in top_k:
    if floating_model:
      print('{0:08.6f}'.format(float(results[i]))+":", labels[i])
    else:
      print('{0:08.6f}'.format(float(results[i]/255.0))+":", labels[i])

Thanks.

Answer

Dummy quantization may not work properly, because the `default_ranges_min` and `default_ranges_max` values for the activation functions have to be guessed.
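To see why a guessed range corrupts the output, here is a hypothetical sketch (not the converter's actual implementation) of what fixing every unannotated activation to the guessed [0, 6] range does: values outside the range are silently clipped, and everything else is snapped to 256 levels.

```python
import numpy as np

def fake_quantize(x, range_min=0.0, range_max=6.0, num_bits=8):
    # Clamp to the guessed range, snap to 2**num_bits uniform levels,
    # then map back to real values.
    scale = (range_max - range_min) / (2 ** num_bits - 1)
    q = np.round((np.clip(x, range_min, range_max) - range_min) / scale)
    return q * scale + range_min

# An activation of 7.0 is silently clipped to 6.0; in-range values
# survive with only small rounding error.
print(fake_quantize(np.array([7.0, 3.0])))
```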

As Sudarsh mentioned in the comments, you should do post-training full integer quantization to convert the .pb to an INT8 tflite file.
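A minimal sketch of post-training full integer quantization, assuming the TF 2.x converter API; the paths, the function name, and the random stand-in data are illustrative only — in practice you would feed a few hundred real preprocessed images through the representative dataset so the converter measures actual activation ranges instead of guessing them:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Stand-in: random tensors shaped like MobileNet inputs. Replace with
    # real preprocessed images for meaningful calibration.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

def convert_saved_model_to_int8(saved_model_dir, output_path):
    # Full-integer post-training quantization (TF 2.x converter API).
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    with open(output_path, "wb") as f:
        f.write(converter.convert())
```

With calibration data the converter derives real per-tensor ranges, so no `--default_ranges_min/max` guessing is needed.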

You can follow this link to get started - here.

Hope it helps.

Regards.
