C++ Tensorflow API with TensorRT


Problem Description

My goal is to run a TensorRT-optimized TensorFlow graph in a C++ application. I am using TensorFlow 1.8 with TensorRT 4. Using the Python API I am able to optimize the graph and see a nice performance increase.

Trying to run the graph in C++ fails with the following error:

Not found: Op type not registered 'TRTEngineOp' in binary running on e15ff5301262. Make sure the Op and Kernel are registered in the binary running in this process.

Other, non-TensorRT graphs work. I had a similar error with the Python API, but solved it by importing tensorflow.contrib.tensorrt. From the error I am fairly certain the kernel and op are not registered, but I am unaware of how to register them in the application after TensorFlow has been built. On a side note, I cannot use Bazel but am required to use CMake. So far I link against libtensorflow_cc.so and libtensorflow_framework.so.

Can anyone help me here? Thanks!

Update: Using the C or C++ API to load _trt_engine_op.so does not throw an error while loading, but it fails to run with:

Invalid argument: No OpKernel was registered to support Op 'TRTEngineOp' with these attrs.  Registered devices: [CPU,GPU], Registered kernels:
  <no registered kernels>

     [[Node: my_trt_op3 = TRTEngineOp[InT=[DT_FLOAT, DT_FLOAT], OutT=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], input_nodes=["tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer", "tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer"], output_nodes=["tower_0/down_0/conv_skip/Relu", "tower_0/down_1/conv_skip/Relu", "tower_0/down_2/conv_skip/Relu", "tower_0/down_3/conv_skip/Relu"], serialized_engine="\220{I\000...00\000\000"](tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer)]]
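
For reference, the load step described in the update above might look roughly like the following. This is only a minimal sketch using the TensorFlow C API; the path to _trt_engine_op.so is an assumption and depends on where TensorFlow is installed.

#include <cstdio>
#include "tensorflow/c/c_api.h"

int main() {
  TF_Status* status = TF_NewStatus();
  // Hypothetical path: point this at the _trt_engine_op.so that ships
  // with tensorflow/contrib/tensorrt in your installation.
  TF_Library* lib = TF_LoadLibrary(
      "tensorflow/contrib/tensorrt/_trt_engine_op.so", status);
  if (TF_GetCode(status) != TF_OK) {
    std::fprintf(stderr, "TF_LoadLibrary failed: %s\n", TF_Message(status));
    TF_DeleteStatus(status);
    return 1;
  }
  // The op definitions from the library are now registered with the runtime;
  // as the error above shows, the GPU kernel can still be missing.
  TF_DeleteLibrary(lib);
  TF_DeleteStatus(status);
  return 0;
}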

Recommended Answer

Another way to solve the problem with the error "Not found: Op type not registered 'TRTEngineOp'" on TensorFlow 1.8:

1) In the file tensorflow/contrib/tensorrt/BUILD, add a new section with the following content:

cc_library(
    name = "trt_engine_op_kernel_cc",
    srcs = [
        "kernels/trt_calib_op.cc",
        "kernels/trt_engine_op.cc",
        "ops/trt_calib_op.cc",
        "ops/trt_engine_op.cc",
        "shape_fn/trt_shfn.cc",
    ],
    hdrs = [
        "kernels/trt_calib_op.h",
        "kernels/trt_engine_op.h",
        "shape_fn/trt_shfn.h",
    ],
    copts = tf_copts(),
    visibility = ["//visibility:public"],
    deps = [
        ":trt_logging",
        ":trt_plugins",
        ":trt_resources",
        "//tensorflow/core:gpu_headers_lib",
        "//tensorflow/core:lib_proto_parsing",
        "//tensorflow/core:stream_executor_headers_lib",
    ] + if_tensorrt([
        "@local_config_tensorrt//:nv_infer",
    ]) + tf_custom_op_library_additional_deps(),
    alwayslink = 1,  # buildozer: disable=alwayslink-with-hdrs
)

2) Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel_cc as a dependency to the corresponding Bazel project you want to build.

PS: There is no need to load the library _trt_engine_op.so with TF_LoadLibrary.
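
With the kernel and op compiled directly into the binary this way, the converted graph can be loaded and run through the regular C++ session API. Below is a rough sketch under that assumption; the graph file name and tensor names are placeholders, not values from the question.

#include <iostream>
#include <memory>
#include <vector>
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  // Load the frozen, TensorRT-converted graph (file name is a placeholder).
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "trt_graph.pb", &graph_def));

  // Create a session and import the graph; with trt_engine_op_kernel_cc
  // linked in, the TRTEngineOp nodes resolve without TF_LoadLibrary.
  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));

  // Input/output tensor names are placeholders; use the ones in your graph.
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 224, 224, 3}));
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input:0", input}}, {"output:0"}, {}, &outputs));
  std::cout << "Got " << outputs.size() << " output tensor(s)" << std::endl;
  return 0;
}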
