CUDA Peer-to-Peer across I/O hubs


Problem Description


Is there an SBIOS entry or other configuration change that will enable peer-to-peer to work for CUDA across the QPI links that connect I/O hubs (or sockets, in the case of CPUs that integrate the I/O hub - Sandy Bridge and higher)?

Solution

No. The QPI link uses a protocol that does not cover all features of the PCIE protocol, in particular some of the features used by P2P transactions.

A specific difference is documented in an Intel datasheet here:

"The IOH does not support non-contiguous byte enables from PCI Express for remote peer-to-peer MMIO transactions. This is an additional restriction over the PCI Express standard requirements to prevent incompatibility with Intel QuickPath Interconnect." (page 135)

So P2P requires a contiguous PCIE fabric between the two devices: both devices need to be on the same PCIE root complex. This particular requirement was publicized by NVIDIA in the CUDA 4.0 timeframe, when GPUDirect v2.0 (Peer-to-Peer) was first introduced.
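In practice, the CUDA runtime lets you query whether P2P is possible between a given pair of devices before trying to use it; on a system where two GPUs hang off different I/O hubs (different root complexes), the query reports that peer access is not available. A minimal sketch using the standard `cudaDeviceCanAccessPeer` and `cudaDeviceEnablePeerAccess` runtime calls:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        printf("Need at least two GPUs to check peer-to-peer access.\n");
        return 0;
    }

    // Peer access is directional, so query every ordered pair.
    for (int d = 0; d < deviceCount; ++d) {
        for (int p = 0; p < deviceCount; ++p) {
            if (d == p) continue;
            int canAccess = 0;
            // Reports 0 when the two devices are not on the same PCIE
            // root complex (e.g. GPUs behind different IOHs/sockets).
            cudaDeviceCanAccessPeer(&canAccess, d, p);
            printf("GPU %d -> GPU %d : P2P %s\n", d, p,
                   canAccess ? "supported" : "NOT supported");
            if (canAccess) {
                cudaSetDevice(d);
                // Enable direct access from device d to device p's memory.
                cudaDeviceEnablePeerAccess(p, 0);
            }
        }
    }
    return 0;
}
```

Compile with `nvcc`. When the query reports 0, calls such as `cudaMemcpyPeer` still work but are staged through host memory rather than going directly over the PCIE fabric.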
