MPI communication complexity

Problem description

I'm studying the communication complexity of a parallel implementation of Quicksort in MPI and I've found something like this in a book:

"A single process gathers p regular samples from each of the other p-1 processes. Since relatively few values are being passed, message latency is likely to be the dominant term of this step. Hence the communication complexity of the gather is O(log p)" (O is actually a theta and p is the number of processors).

The same affirmation is made for the broadcast message.

Why are these group communications complexity O(log p)? Is it because the communication is done using some kind of tree-based hierarchy?

What if latency is not the dominant term and there's a lot of data being sent? Would the complexity be O(n log (p)) where n would be the size of the data being sent divided by the available bandwidth?

And, what about the communication complexity of an MPI_Send() and an MPI_Recv()?

Thanks in advance!

Recommended answer

Yes, gather and scatter are implemented (depending on the particular MPI implementation) using, for instance, binomial trees, a hypercube, a linear array, or a 2D square mesh. An all-gather operation may be implemented using a hypercube, and so on.
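
To see where the log p comes from, here is a rough sketch of a binomial-tree broadcast written only with point-to-point calls. It is not how any particular MPI library implements MPI_Bcast; the helper name tree_bcast and the fixed MPI_INT payload are just choices made for the sketch:

    /* Sketch of a binomial-tree broadcast: each round doubles the number of
     * processes holding the data, so about ceil(log2(p)) rounds are needed. */
    #include <mpi.h>

    void tree_bcast(int *buf, int count, int root, MPI_Comm comm) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        int rel = (rank - root + size) % size;   /* rank relative to the root */

        for (int mask = 1; mask < size; mask <<= 1) {
            if (rel < mask) {
                /* This process already has the data: pass it on. */
                int dst = rel | mask;
                if (dst < size)
                    MPI_Send(buf, count, MPI_INT, (dst + root) % size, 0, comm);
            } else if (rel < (mask << 1)) {
                /* This process receives the data in this round. */
                int src = rel ^ mask;
                MPI_Recv(buf, count, MPI_INT, (src + root) % size, 0, comm,
                         MPI_STATUS_IGNORE);
            }
        }
    }

Each iteration of the loop is one communication round, which is why the latency term gets multiplied by log p.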

For a gather or scatter, let lambda be the latency and beta the bandwidth. Then log p steps are required. Suppose you are sending n integers, each represented using 4 bytes. The time to send them is roughly

    lambda * log p + 4 * n / beta

This is O(log p) when n = O(1) and O(log p + n) otherwise. For a broadcast, the time required is roughly

    log p * (lambda + 4 * n / beta)

which is O(log p) when n = O(1) and O(n log p) otherwise.
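
To get a feel for the difference between O(log p + n) and O(n log p), you can plug made-up numbers into the two models above; lambda = 1 microsecond, beta = 1 GB/s, n = 10^6 integers and p = 1024 below are arbitrary illustrative values:

    /* Evaluate the two cost models above with illustrative (made-up) numbers. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double lambda = 1e-6;   /* assumed latency: 1 microsecond per message */
        double beta   = 1e9;    /* assumed bandwidth: 1 GB/s                  */
        double n      = 1e6;    /* number of 4-byte integers being moved      */
        double p      = 1024;   /* number of processes                        */

        double steps  = ceil(log2(p));                      /* log p rounds   */
        double gather = steps * lambda + 4.0 * n / beta;    /* O(log p + n)   */
        double bcast  = steps * (lambda + 4.0 * n / beta);  /* O(n log p)     */

        printf("gather/scatter model: %.3f ms\n", gather * 1e3);  /* about 4 ms  */
        printf("broadcast model:      %.3f ms\n", bcast  * 1e3);  /* about 40 ms */
        return 0;
    }

With these numbers the gather pays for the 4 MB of data once, while the broadcast pays the bandwidth term in every one of the 10 rounds, hence the extra factor of log p.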

Finally, for point-to-point communications like MPI_Send(), if you are sending n integers the communication complexity is O(n). When n = O(1), the complexity is obviously O(1).
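
If you want to see the lambda + 4n/beta behaviour of a single message in practice, a simple ping-pong timing between two ranks is enough. The buffer size and repetition count below are arbitrary; run it with at least two ranks (e.g. mpirun -np 2):

    /* Ping-pong timing of MPI_Send/MPI_Recv between ranks 0 and 1. The one-way
     * time should grow roughly like lambda + 4n/beta as n increases. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;      /* integers per message (arbitrary size) */
        const int reps = 100;
        int *buf = calloc(n, sizeof *buf);

        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, n, MPI_INT, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, n, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("one-way time per message: %g s\n", (t1 - t0) / (2.0 * reps));

        free(buf);
        MPI_Finalize();
        return 0;
    }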
