GPUDirect Peer 2 peer using PCIe bus: If I need to access too much data on other GPU, will it not result in deadlocks?

Problem Description

I have a simulation program that requires a lot of data. I load the data onto the GPUs for calculation, and there are many dependencies within the data. Since one GPU was not enough for the data, I upgraded to two GPUs. The limitation, however, is that if I need data that resides on the other GPU, it first has to be copied to the host.

So, if I use GPUDirect P2P, will the PCIe bus handle that much back-and-forth communication between the GPUs? Won't it result in deadlocks?

I am new to this, so I need some help and insight.

Recommended Answer

PCI Express is full duplex, so transfers run at full speed in both directions. There should be no "deadlock" of the kind you might see with synchronous MPI communication, which requires a handshake before proceeding.
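
A minimal sketch of what this looks like in CUDA, assuming two GPUs (devices 0 and 1) that can reach each other over PCIe: peer access is enabled in both directions, and one `cudaMemcpyPeerAsync` per direction is issued on its own stream, so neither copy waits on the other. Buffer names and sizes are illustrative.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 64 << 20;                  // 64 MiB per buffer (illustrative)
    float *send0, *recv0, *send1, *recv1;

    // Check that the two GPUs can address each other's memory over PCIe.
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) {
        std::printf("P2P not supported between these two GPUs\n");
        return 1;
    }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);               // GPU 0 may now access GPU 1 memory
    cudaMalloc(&send0, bytes);
    cudaMalloc(&recv0, bytes);
    cudaStream_t s0;
    cudaStreamCreate(&s0);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);               // and the reverse direction
    cudaMalloc(&send1, bytes);
    cudaMalloc(&recv1, bytes);
    cudaStream_t s1;
    cudaStreamCreate(&s1);

    // Issue one copy in each direction; both calls return immediately and
    // PCIe carries the two directions concurrently -- no handshake, no deadlock.
    cudaSetDevice(0);
    cudaMemcpyPeerAsync(recv1, 1, send0, 0, bytes, s0);   // GPU0 -> GPU1
    cudaSetDevice(1);
    cudaMemcpyPeerAsync(recv0, 0, send1, 1, bytes, s1);   // GPU1 -> GPU0

    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);
    return 0;
}
```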

As Robert mentioned in a comment, "accessing data over PCIE bus is a lot slower than accessing it from on-board memory". However, it should be significantly faster than transferring data from GPU1 to the CPU and then from the CPU to GPU2, since you don't have to copy it twice.
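
As a rough sketch of the difference: the path the question describes needs a host staging buffer and two copies, while a peer copy is a single call. The buffer names here are hypothetical; note that `cudaMemcpyPeer` also works without peer access enabled, in which case the runtime stages the data through host memory internally.

```cuda
#include <cuda_runtime.h>

// (a) What the question describes: stage through the host -- two PCIe hops
//     plus a host buffer (h_staging is assumed to be allocated host memory).
void copy_via_host(float* d_dst_gpu1, const float* d_src_gpu0,
                   float* h_staging, size_t bytes) {
    cudaMemcpy(h_staging, d_src_gpu0, bytes, cudaMemcpyDeviceToHost);  // GPU0 -> CPU
    cudaMemcpy(d_dst_gpu1, h_staging, bytes, cudaMemcpyHostToDevice);  // CPU -> GPU1
}

// (b) Direct peer copy: a single call, routed over PCIe between the GPUs
//     when peer access is enabled.
void copy_peer(float* d_dst_gpu1, const float* d_src_gpu0, size_t bytes) {
    cudaMemcpyPeer(d_dst_gpu1, 1, d_src_gpu0, 0, bytes);  // dst on GPU 1, src on GPU 0
}
```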

You should try to minimize the number of GPU-to-GPU transfers, especially if you have to synchronize data before transferring it (which can happen in some algorithms). However, you could also try to overlap some execution with the data transfers, as in the sketch below. You can look at the Peer-to-Peer Memory Copy section of the CUDA C Programming Guide: http://docs.nvidia.com/cuda/cuda-c-programming-guide/#peer-to-peer-memory-copy
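
A sketch of that overlap pattern, assuming peer access is already enabled, the streams were created on GPU 1, and `kernel` stands in for your own computation on data already resident on GPU 1: the next chunk is pulled from GPU 0 on one stream while GPU 1 keeps computing on the chunk it already holds. All names and sizes are illustrative.

```cuda
#include <cuda_runtime.h>

// Stand-in for the real simulation work on data already resident on GPU 1.
__global__ void kernel(float* data, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// Pull the next chunk from GPU 0 while GPU 1 computes on the current chunk.
void overlap_step(float* d_next_gpu1, const float* d_src_gpu0,
                  float* d_cur_gpu1, size_t n,
                  cudaStream_t copyStream, cudaStream_t computeStream) {
    cudaSetDevice(1);

    // Start the peer transfer on one stream...
    cudaMemcpyPeerAsync(d_next_gpu1, 1, d_src_gpu0, 0,
                        n * sizeof(float), copyStream);

    // ...while the kernel runs concurrently on another stream.
    kernel<<<(unsigned)((n + 255) / 256), 256, 0, computeStream>>>(d_cur_gpu1, n);

    // Wait for both before swapping buffers for the next iteration.
    cudaStreamSynchronize(copyStream);
    cudaStreamSynchronize(computeStream);
}
```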
