Load and test a .trt model


Problem description


I need to run my model on an NVIDIA Jetson T2, so I converted my working YOLOv3 model into TensorRT (.trt format). This link (https://towardsdatascience.com/have-you-optimized-your-deep-learning-model-before-deployment-cdc3aa7f413d) helped me convert the YOLO model to .trt. But after converting the model, I need to test whether it works correctly, i.e. whether the detection is good enough. I couldn't find any sample code for loading and testing a .trt model. If anybody can help, please post sample code in the answer section or link to a reference.

Recommended answer


You can load your TRT model and run inference with it using this snippet of code. It was executed with TensorFlow 2.1.0 in a Google Colab environment.

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# Load the TensorRT-optimized SavedModel.
saved_model_loaded = tf.saved_model.load(
    output_saved_model_dir, tags=[tag_constants.SERVING])
signature_keys = list(saved_model_loaded.signatures.keys())
print(signature_keys)  # Outputs: ['serving_default']

# Grab the concrete function for the default serving signature.
graph_func = saved_model_loaded.signatures[signature_keys[0]]
graph_func(x_test)  # Use this to perform inference
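`graph_func` expects a batched float tensor; the exact shape depends on how the model was exported. As a sketch only, assuming a YOLOv3-style 416×416 RGB input, a dependency-free preprocessor (the `preprocess` helper and the nearest-neighbour resize are illustrative assumptions; use cv2/PIL resizing in practice) might look like:

```python
import numpy as np

def preprocess(frame, size=416):
    """Turn a uint8 HxWx3 image into a (1, size, size, 3) float32
    batch scaled to [0, 1]. Nearest-neighbour resize via index
    tricks, to keep this sketch dependency-free."""
    h, w = frame.shape[:2]
    ys = np.arange(size) * h // size   # row indices to sample
    xs = np.arange(size) * w // size   # column indices to sample
    resized = frame[ys][:, xs]
    return resized.astype(np.float32)[None] / 255.0
```

You would then call `graph_func(tf.constant(preprocess(frame)))`.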


output_saved_model_dir is the location of your TensorRT-optimized model in SavedModel format.


From here, you can add your own testing methods to compare the performance of the model before and after conversion.
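One simple test is latency: time the loaded `graph_func` on a fixed input after a few warm-up runs (the first calls build TRT engines and are slow). A minimal sketch; the `benchmark` helper is an assumption, not part of the TensorFlow API:

```python
import time

def benchmark(infer_fn, x, warmup=5, runs=50):
    """Return the mean latency of infer_fn(x) in milliseconds.
    infer_fn is any callable, e.g. the graph_func loaded above."""
    for _ in range(warmup):        # warm-up: engine build, caches
        infer_fn(x)
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn(x)
    return (time.perf_counter() - start) / runs * 1000.0
```

For example, `benchmark(graph_func, x_test)` gives a number you can compare against the unconverted model.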

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Conversion settings: 4 GB workspace, FP16 precision,
# and up to 100 cached TRT engines.
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(max_workspace_size_bytes=(1 << 32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)

converter.convert()
converter.save(output_saved_model_dir)


Here is the code used to convert and save the TensorRT-optimized model.
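Because the conversion above uses FP16, the converted model's outputs will not match the original exactly. To check that the detections are still good enough, one approach is to run both models on the same input and compare outputs with a loose tolerance. A minimal sketch; the function name and tolerance are assumptions:

```python
import numpy as np

def outputs_match(ref, trt_out, atol=1e-2):
    """True if the TRT outputs are close to the reference model's.
    FP16 loses precision, so compare with a loose absolute
    tolerance rather than exact equality."""
    return bool(np.allclose(ref, trt_out, atol=atol))
```

For a detector, also inspect a handful of images visually: box coordinates and confidences can shift slightly without hurting detection quality.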

