How to integrate a YOLOv3 custom object detector in a Flutter app?


Problem Description

I developed a Flutter app and a custom YOLOv3 object detector; the two modules are independent. Now I want to combine them into a single project, but I could not figure out how to use the trained weights of the custom object detector in my Flutter app. Could anyone please help me with this integration?

Recommended Answer

I don't know whether you're building an Android or iOS app with Flutter.

Anyway, to be able to use a custom-trained YOLOv3 model in your Flutter app, follow these two steps.

1. First you need to convert the trained YOLOv3 model to a tflite version:

You can use this repo for that purpose.

Save the custom-trained YOLOv3 darknet weights to the TF model format needed for tflite conversion:

python save_model.py --weights yolov3.weights --output ./checkpoints/yolov3-416 --input_size 416 --model yolov3 --framework tflite

Convert the YOLOv3 model to a tflite version:

python convert_tflite.py --weights ./checkpoints/yolov3-416 --output ./checkpoints/yolov3-416.tflite
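Note that both commands above export a model with a fixed 416×416 input (`--input_size 416`), so at inference time images must be resized to that shape first. As a rough illustration only (this is not part of the conversion repo, and `letterbox_params` is a hypothetical helper), here is a minimal pure-Python sketch of computing the parameters for an aspect-preserving "letterbox" resize, a common YOLO preprocessing step:

```python
# Compute the scale and padding needed to letterbox an arbitrary image
# into the fixed 416x416 input the exported model expects.
INPUT_SIZE = 416  # matches --input_size 416 in the save_model.py call above

def letterbox_params(width, height, target=INPUT_SIZE):
    """Return (scale, new_w, new_h, pad_x, pad_y) for an aspect-preserving resize."""
    scale = min(target / width, target / height)
    new_w, new_h = int(round(width * scale)), int(round(height * scale))
    pad_x = (target - new_w) // 2  # horizontal padding on each side
    pad_y = (target - new_h) // 2  # vertical padding on each side
    return scale, new_w, new_h, pad_x, pad_y

# Example: a 640x480 photo is scaled to 416x312 and padded vertically.
print(letterbox_params(640, 480))
```

In the Flutter app this resizing is handled for you by the plugin below; the sketch only makes explicit what shape the converted model expects.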

2. Then use a Flutter plugin to access the TensorFlow Lite API, which works with both Android and iOS - https://github.com/shaqian/flutter_tflite

a) Create an assets folder and place your label file and model file in it. In pubspec.yaml, add:

  assets:
   - assets/labels.txt
   - assets/yolov3-416.tflite

b) Import the library:

import 'package:tflite/tflite.dart';

c) Load the model and labels:

String res = await Tflite.loadModel(
  model: "assets/yolov3-416.tflite",
  labels: "assets/labels.txt",
  numThreads: 1,         // defaults to 1
  isAsset: true,         // defaults to true, set to false to load resources outside assets
  useGpuDelegate: false  // defaults to false, set to true to use GPU delegate
);

d) Run it on an image:

var recognitions = await Tflite.detectObjectOnImage(
  path: filepath,        // required
  model: "YOLOv3",
  imageMean: 0.0,
  imageStd: 255.0,
  threshold: 0.3,        // defaults to 0.1
  numResultsPerClass: 2, // defaults to 5
  anchors: anchors,      // defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]
  blockSize: 32,         // defaults to 32
  numBoxesPerBlock: 5,   // defaults to 5
  asynch: true           // defaults to true
);
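For intuition on the parameters: imageMean: 0.0 with imageStd: 255.0 scales pixels to [0, 1], and anchors, blockSize, and numBoxesPerBlock feed the standard YOLO grid decoding, where each 32-pixel grid cell predicts boxes relative to per-anchor priors. A minimal pure-Python sketch of decoding one raw box (illustrative only; `decode_box` is a hypothetical helper, and the plugin performs this step internally):

```python
import math

BLOCK_SIZE = 32  # matches the blockSize parameter above
# Default anchor priors from the plugin call above, in grid-cell units,
# stored as (width, height) pairs flattened into one list.
ANCHORS = [0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
           5.47434, 7.88282, 3.52778, 9.77052, 9.16828]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, anchor_idx):
    """Decode one raw prediction (tx, ty, tw, th) at grid cell (cx, cy)
    into a pixel-space box center and size."""
    bx = (cx + sigmoid(tx)) * BLOCK_SIZE          # box center x in pixels
    by = (cy + sigmoid(ty)) * BLOCK_SIZE          # box center y in pixels
    bw = ANCHORS[2 * anchor_idx] * math.exp(tw) * BLOCK_SIZE      # box width
    bh = ANCHORS[2 * anchor_idx + 1] * math.exp(th) * BLOCK_SIZE  # box height
    return bx, by, bw, bh

# A zero raw prediction in cell (0, 0) with the first anchor decodes to a
# box centered mid-cell with the anchor's prior size.
print(decode_box(0.0, 0.0, 0.0, 0.0, 0, 0, 0))
```

You normally never decode boxes yourself in the app; recognitions already contains the detected class, confidence, and rectangle for each box.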

e) Release the resources:

await Tflite.close();

