Is it possible to share a Cuda context between applications?


Question

I'd like to pass a Cuda context between two independent Linux processes (using POSIX message queues, which I already have set up).

Using cuCtxPopCurrent() and cuCtxPushCurrent(), I can get the context pointer, but that pointer only has meaning within the address space of the process that calls the function, so passing it between processes is meaningless.
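
A minimal driver-API sketch of the situation (error checking omitted): the handle that cuCtxPopCurrent() hands back is just an opaque pointer in the calling process's address space, which is why sending its value over a message queue gives the receiver nothing usable.

    /* Illustrative only: shows that a popped CUcontext is a process-local
       pointer, not something another process can use. Link with -lcuda. */
    #include <cuda.h>
    #include <stdio.h>

    int main(void)
    {
        CUdevice  dev;
        CUcontext ctx;

        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);   /* context becomes current on this thread */

        CUcontext popped;
        cuCtxPopCurrent(&popped);    /* popped == ctx */

        /* This value is only meaningful inside this process's address
           space; a receiver on a POSIX message queue just gets a number. */
        printf("context handle: %p\n", (void *)popped);

        cuCtxPushCurrent(popped);    /* reattach before cleaning up */
        cuCtxDestroy(ctx);
        return 0;
    }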

I'm looking for other solutions. My ideas so far are:


  1. Try to deep copy the CUcontext struct, and then pass the copy.
  2. See if I can find a shared-memory solution where all my Cuda pointers are placed there so both processes can access them (see the sketch after this list).
  3. Merge the processes into one program.
  4. It is possible that there is better context sharing in Cuda 4.0, which I could switch to.
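
To make idea (2) concrete, here is a hedged sketch of the raw-pointer variant: the CUdeviceptr value itself can be placed in POSIX shared memory, but it is only valid inside the context that allocated it, so the second process cannot actually use it. The shared-memory name below is made up for this example.

    /* Illustrative only: publishing a device pointer via POSIX shared
       memory. A second process can read the value, but any CUDA call on
       it there fails, because the allocation belongs to this process's
       context. Link with -lcuda -lrt. */
    #include <cuda.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHM_NAME "/cuda_ptr_demo"   /* hypothetical name */

    int main(void)
    {
        CUdevice dev;
        CUcontext ctx;
        CUdeviceptr dptr;

        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);
        cuMemAlloc(&dptr, 1 << 20);              /* 1 MiB of device memory */

        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof dptr);
        CUdeviceptr *shared = mmap(NULL, sizeof dptr, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);

        *shared = dptr;   /* the other process can read this value, but it
                             cannot copy through it: the pointer is bound
                             to this process's context */

        munmap(shared, sizeof dptr);
        close(fd);
        shm_unlink(SHM_NAME);
        cuMemFree(dptr);
        cuCtxDestroy(ctx);
        return 0;
    }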

I'm not sure option (1) is possible, nor if (2) is available or possible. (3) isn't really an option if I want to make things generic (this is within a hijack shim). (4) I'll look at Cuda 4.0, but I'm not sure if it will work there, either.

Thanks!

Answer

In a word, no. Contexts are implicitly tied to the thread and application that created them. There is no portability between separate applications. It is pretty much the same story with OpenGL and the various versions of Direct3D as well: sharing memory between applications isn't supported.

CUDA 4 makes the API thread-safe, so that a single host thread can hold more than one context (i.e. more than one GPU) simultaneously and use the canonical device-selection API to choose which GPU it is working with. That won't help here, if I am understanding your question/application correctly.
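
For reference, what the thread-safe CUDA 4 API does allow is one host thread driving several GPUs within a single process, e.g. via the runtime's device-selection call. A minimal sketch, assuming two GPUs are present (error checking omitted):

    /* Illustrative only: one host thread switching between two GPUs with
       cudaSetDevice(). This is per-process device selection, not context
       sharing between processes. */
    #include <cuda_runtime.h>

    int main(void)
    {
        float *a0, *a1;

        cudaSetDevice(0);                          /* work against GPU 0 */
        cudaMalloc((void **)&a0, 1024 * sizeof(float));

        cudaSetDevice(1);                          /* same thread, now GPU 1 */
        cudaMalloc((void **)&a1, 1024 * sizeof(float));

        cudaSetDevice(0);
        cudaFree(a0);
        cudaSetDevice(1);
        cudaFree(a1);
        return 0;
    }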

