Calling a CUDA-enabled library in a new thread

Problem description

I have some code that I have written and put into its own library that uses CUDA to do some processing on the GPU.

I am building a GUI front-end using Qt, and as part of loading the GUI, I call

    CUresult res;
    CUdevice dev;
    CUcontext ctx;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

to go ahead and initialize the GPU, so that the application is as responsive as possible when calling the CUDA-enabled library.

The problem is, I have now started trying to call my CUDA-enabled library from a different thread.

Do I have to make some kind of special effort to do this? That other thread is the ONLY one calling any CUDA functions (except for the main thread calling cuInit()), but my code is crashing on a cudaFree() call in my CUDA library.

Thanks

Answer

Contexts are tied to the thread that created them. So your two choices are either to have the GPU "worker thread" establish the context, or use the driver API context migration calls (cuCtxPopCurrent and cuCtxPushCurrent) to move the context from thread to thread. Be aware that context migration isn't free, so if you are going to do it a lot, you will notice an increase in GPU latency.
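
For reference, here is a minimal sketch of the migration approach (error checking omitted; std::thread is used purely for illustration, and a QThread works the same way). The body of the worker lambda stands in for the code that calls into the CUDA-enabled library.

    #include <cuda.h>
    #include <thread>

    int main()
    {
        CUdevice dev;
        CUcontext ctx;

        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);   // context is now current on the main thread

        // Detach the context from the main thread so another thread can adopt it.
        cuCtxPopCurrent(&ctx);

        std::thread worker([&ctx] {
            // Make the migrated context current on this thread before any CUDA calls.
            cuCtxPushCurrent(ctx);

            // ... call into the CUDA-enabled library here ...

            // Detach again when the worker is done with the GPU.
            cuCtxPopCurrent(&ctx);
        });
        worker.join();

        cuCtxDestroy(ctx);
        return 0;
    }

The first option is simpler when it fits your design: skip cuCtxCreate() on the main thread entirely and let the GPU worker thread create (and own) the context the first time it runs.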
