Can multiple processes share one CUDA context?


Problem description


This question is a follow-up to Jason R's comment on Robert Crovella's answer to this original question ("Multiple CUDA contexts for one device - any sense?"):


When you say that multiple contexts cannot run concurrently, is this limited to kernel launches only, or does it refer to memory transfers as well? I have been considering a multiprocess design all on the same GPU that uses the IPC API to transfer buffers from process to process. Does this mean that effectively, only one process at a time has exclusive access to the entire GPU (not just particular SMs)? [...] How does that interplay with asynchronously-queued kernels/copies on streams in each process as far as scheduling goes?


Robert Crovella suggested asking this in a new question, but that never happened, so let me do it here.

Answer


The Multi-Process Service (MPS) is an alternative CUDA implementation by Nvidia that makes multiple processes use the same context. This allows, for example, kernels from multiple processes to run in parallel when no single process fills the entire GPU by itself.
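As a minimal sketch of how this looks in practice, MPS is enabled by starting its control daemon before launching the client processes. The commands below assume a Linux system with the NVIDIA driver and CUDA toolkit installed; the client process names are placeholders:

```shell
# Restrict MPS to one GPU and start the MPS control daemon in the background
export CUDA_VISIBLE_DEVICES=0
nvidia-cuda-mps-control -d

# Launch the CUDA processes as usual; their work is routed through
# the shared MPS server context on the device
# ./process_a & ./process_b &

# Shut the daemon down when finished
echo quit | nvidia-cuda-mps-control
```

Client applications need no code changes: any process that initializes CUDA while the daemon is running is transparently attached to the shared MPS server context.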

