TensorRT multiple threads

Question

I am trying to use TensorRT through the Python API, in multiple threads that all share one CUDA context (everything works fine in a single thread). I am using Docker with the tensorrt:20.06-py3 image, an ONNX model, and an Nvidia 1070 GPU.

The multi-threaded approach should be allowed, as mentioned in the TensorRT Best Practices guide.

I created the context in the main thread:

import pycuda.driver as cuda

cuda.init()
device = cuda.Device(0)
ctx = device.make_context()  # the new context becomes current on this (main) thread

I tried two methods: first, building the engine in the main thread and using it in the execution thread. That case gives this error:

[TensorRT] ERROR: ../rtSafe/cuda/caskConvolutionRunner.cpp (373) - Cask Error in checkCaskExecError<false>: 10 (Cask Convolution execution)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception

Second, I tried to build the model inside the thread; that gives me this error:

pycuda._driver.LogicError: explicit_context_dependent failed: invalid device context - no currently active context?

The error appears when I call 'cuda.Stream()'.

I am sure that I can run multiple CUDA streams in parallel under the same CUDA context, but I don't know how to do it.
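
For reference, a minimal sketch that reproduces the failure (assuming only pycuda is installed): a context created with make_context() is current only on the thread that created it, so a fresh thread has no active context, and any call that needs one, such as cuda.Stream(), raises the LogicError above.

import threading
import pycuda.driver as cuda

cuda.init()
ctx = cuda.Device(0).make_context()  # current on the main thread only

def worker():
    # No context is active on this thread, so this raises
    # pycuda._driver.LogicError: explicit_context_dependent failed ...
    cuda.Stream()

t = threading.Thread(target=worker)
t.start()
t.join()
ctx.pop()  # release the context from the main thread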

Answer

I found a solution. The idea is to create a normal global ctx = device.make_context(), and then in each execution thread do:

ctx.push()
# ... execute inference code ...
ctx.pop()
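
Putting it together, here is a minimal sketch of the whole pattern, assuming a prebuilt serialized engine on disk ("model.engine" is a placeholder name) and the pycuda and tensorrt packages. Each worker thread pushes the shared context before creating streams or running inference, and pops it when done; each thread also gets its own execution context from the shared engine.

import threading

import pycuda.driver as cuda
import tensorrt as trt

cuda.init()
device = cuda.Device(0)
ctx = device.make_context()  # shared, global context; current on the main thread

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("model.engine", "rb") as f:  # placeholder path to a serialized engine
    engine = runtime.deserialize_cuda_engine(f.read())

def worker():
    ctx.push()  # make the shared context current on this thread
    try:
        stream = cuda.Stream()  # legal now: a context is active
        exec_ctx = engine.create_execution_context()  # one execution context per thread
        # ... allocate device buffers and run, e.g. exec_ctx.execute_async_v2(...) ...
        stream.synchronize()
    finally:
        ctx.pop()  # always pop, even if inference raises

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

ctx.pop()  # finally release the context from the main thread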

The link for the source and full sample is here.
