MPI_Send does not work with higher buffer size?
Question
When the MPI_Send buffer size is 100 the program works, but it gets stuck when the size is 1000 or greater. Why?
if (id == 0) {
    rgb_image = stbi_load(argv[1], &width, &height, &bpp, CHANNEL_NUM);
    for (int i = 0; i < size - 1; i++)
        MPI_Send(rgb_image, 1000, MPI_UINT8_T, i, 0, MPI_COMM_WORLD);
}
uint8_t *part = (uint8_t*) malloc(sizeof(uint8_t) * 1000);
if (id != size - 1 && size > 1)
    MPI_Recv(part, 1000, MPI_UINT8_T, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
Answer
This program is not valid w.r.t. the MPI standard, since there is no matching receive (on rank 0) for MPI_Send(..., dest=0, ...).
MPI_Send() is allowed to block until a matching receive is posted (and that generally happens when the message is "large") ... and the required matching receive never gets posted.
A typical fix would be to issue an MPI_Irecv(..., src=0, ...) on rank 0 before the MPI_Send() (and an MPI_Wait() after), or to handle the 0 -> 0 communication with MPI_Sendrecv().
That being said, it would likely be more efficient to create a communicator with all the ranks minus the last one, and MPI_Bcast() in this communicator.