Does AMD's OpenCL offer something similar to CUDA's GPUDirect?


Question

NVIDIA offers GPUDirect to reduce memory transfer overheads. I'm wondering if there is a similar concept for AMD/ATI? Specifically:

1) Do AMD GPUs avoid the second memory transfer when interfacing with network cards, as described here. In case the graphic is lost at some point, here is a description of the impact of GPUDirect on getting data from a GPU on one machine to be transferred across a network interface: With GPUDirect, GPU memory goes to Host memory then straight to the network interface card. Without GPUDirect, GPU memory goes to Host memory in one address space, then the CPU has to do a copy to get the memory into another Host memory address space, then it can go out to the network card.

2) Do AMD GPUs allow P2P memory transfers when two GPUs are shared on the same PCIe bus, as described here. In case the graphic is lost at some point, here is a description of the impact of GPUDirect on transferring data between GPUs on the same PCIe bus: With GPUDirect, data can move directly between GPUs on the same PCIe bus, without touching host memory. Without GPUDirect, data always has to go back to the host before it can get to another GPU, regardless of where that GPU is located.
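
For context, this sketch is my illustration rather than part of the original question: on the CUDA side, the peer-to-peer path described above is exposed through the runtime's peer-access API. A minimal example, assuming two P2P-capable GPUs at device indices 0 and 1:

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        const size_t size = 1 << 20;   /* 1 MiB, arbitrary example size */
        int can_access = 0;
        void *src = NULL, *dst = NULL;

        /* Can device 0 access device 1's memory directly over PCIe? */
        cudaDeviceCanAccessPeer(&can_access, 0, 1);
        if (!can_access) {
            printf("P2P not supported between devices 0 and 1\n");
            return 1;
        }

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  /* let device 0 reach device 1 */
        cudaMalloc(&src, size);

        cudaSetDevice(1);
        cudaMalloc(&dst, size);

        /* With peer access enabled, this copy can travel GPU-to-GPU over
           the PCIe bus without being staged through host memory. */
        cudaMemcpyPeer(dst, 1, src, 0, size);

        cudaFree(dst);
        cudaSetDevice(0);
        cudaFree(src);
        return 0;
    }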

Edit: BTW, I'm not entirely sure how much of GPUDirect is vaporware and how much of it is actually useful. I've never actually heard of a GPU programmer using it for something real. Thoughts on this are welcome too.

Answer

I think you may be looking for the CL_MEM_ALLOC_HOST_PTR flag in clCreateBuffer. While the OpenCL specification states that this flag "specifies that the application wants the OpenCL implementation to allocate memory from host accessible memory", it is uncertain what AMD's implementation (or other implementations) might do with it.
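
As an illustration only (not part of the original answer), here is a minimal sketch of allocating such a buffer and mapping it for host access; the context, queue, data source, and size are placeholder assumptions:

    #include <CL/cl.h>
    #include <string.h>

    /* Sketch: ask the OpenCL implementation to allocate host-accessible
       (often pinned) memory, then map it so the host can fill it directly.
       'context' and 'queue' are assumed to exist already. */
    void fill_host_accessible_buffer(cl_context context, cl_command_queue queue,
                                     const void *src, size_t size)
    {
        cl_int err;
        cl_mem buf = clCreateBuffer(context,
                                    CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                    size, NULL, &err);

        /* Map the buffer into the host address space; with ALLOC_HOST_PTR
           the implementation can hand back a pointer into its own allocation. */
        void *ptr = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                       0, size, 0, NULL, NULL, &err);
        memcpy(ptr, src, size);
        clEnqueueUnmapMemObject(queue, buf, ptr, 0, NULL, NULL);
        clFinish(queue);
        clReleaseMemObject(buf);
    }

Whether the returned pointer actually lives in pinned memory is implementation-defined, which is exactly the uncertainty noted above.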

Here is a related thread on the topic: http://www.khronos.org/message_boards/viewtopic.php?f=28&t=2440

Hope this helps.

Edit: I do know that nVidia's OpenCL SDK implements this as allocation in pinned/page-locked memory. I am fairly certain this is what AMD's OpenCL SDK does when running on the GPU.
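
If that is the case, the usual way to take advantage of it is to treat the CL_MEM_ALLOC_HOST_PTR buffer as a pinned staging area for transfers into an ordinary device buffer. A sketch under that assumption (the buffer names are illustrative, not from the answer):

    #include <CL/cl.h>

    /* Sketch: use a CL_MEM_ALLOC_HOST_PTR buffer as a pinned staging area.
       'staging' was created with CL_MEM_ALLOC_HOST_PTR, 'device_buf' without;
       writes from the mapped (likely pinned) pointer are typically faster
       than writes from ordinary pageable host memory. */
    void staged_upload(cl_command_queue queue, cl_mem staging, cl_mem device_buf,
                       size_t size)
    {
        cl_int err;
        void *pinned = clEnqueueMapBuffer(queue, staging, CL_TRUE, CL_MAP_WRITE,
                                          0, size, 0, NULL, NULL, &err);

        /* ... fill 'pinned' with the data to upload ... */

        clEnqueueWriteBuffer(queue, device_buf, CL_TRUE, 0, size, pinned,
                             0, NULL, NULL);
        clEnqueueUnmapMemObject(queue, staging, pinned, 0, NULL, NULL);
    }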
