How does MPI_IN_PLACE work with MPI_Scatter?


Question

What exactly does MPI_IN_PLACE do when given as an argument to MPI_Scatter, and how should it be used? I can't make sense of man MPI_Scatter:

When the communicator is an intracommunicator, you can perform a gather operation in-place (the output buffer is used as the input buffer). Use the variable MPI_IN_PLACE as the value of the root process recvbuf. In this case, recvcount and recvtype are ignored, and the root process sends no data to itself. Because the in-place option converts the receive buffer into a send-and-receive buffer, a Fortran binding that includes INTENT must mark these as INOUT, not OUT.

What I want to do is use the same buffer that contains the data on the root as the receive buffer at every other process (like in MPI_Bcast). Will MPI_Scatter with MPI_IN_PLACE let me do this?

Answer

According to the man page:

sendbuf -- address of send buffer (choice, significant only at root)

and from the discussion:

For scatter/scatterv, MPI_IN_PLACE should be passed as the recvbuf. For gather and most other collectives, MPI_IN_PLACE should be passed as the sendbuf.
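
To illustrate the gather half of that rule, here is a minimal sketch (not part of the original answer; MPI_DOUBLE and the names sendbuf, recvbuf, and sendcount are illustrative assumptions). At the root, MPI_IN_PLACE replaces the send buffer, and the root's own contribution is assumed to already sit in its slot of the receive buffer:

    /* in-place gather: MPI_IN_PLACE goes in the *send* position at the root */
    if (rank == iroot)
        /* the root's own block is assumed to already be in recvbuf */
        MPI_Gather(MPI_IN_PLACE, sendcount, MPI_DOUBLE, recvbuf, sendcount,
                   MPI_DOUBLE, iroot, MPI_COMM_WORLD);
    else
        /* the receive arguments are ignored on non-root ranks */
        MPI_Gather(sendbuf, sendcount, MPI_DOUBLE, NULL, 0, MPI_DOUBLE,
                   iroot, MPI_COMM_WORLD);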

For MPI_Scatter, therefore, you need to use MPI_IN_PLACE in the recv buffer position on the root process, e.g.

if (rank == iroot)
     /* at the root, MPI_IN_PLACE takes the place of the receive buffer;
        `datatype` stands for the actual MPI datatype, e.g. MPI_DOUBLE */
     MPI_Scatter(buf, sendcount, datatype, MPI_IN_PLACE, sendcount,
                 datatype, iroot, MPI_COMM_WORLD);
else
     /* the send arguments are ignored on non-root ranks */
     MPI_Scatter(dummy, sendcount, datatype, buf, sendcount, datatype,
                 iroot, MPI_COMM_WORLD);

You can then use buf as the send buffer on the root and the same buf as the receive buffer on every other process. The dummy buffer on the receiving processes could probably also be replaced by MPI_IN_PLACE.
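
Putting this together, here is a minimal self-contained sketch (an illustration rather than the answer's own code; MPI_DOUBLE and the per-process chunk size N are assumptions). Only the root allocates and fills the full array, its own chunk stays in place, and every other rank receives its chunk into a small local buffer, which is the MPI_Bcast-like usage the question asks about:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int N = 4;        /* elements per process (assumption) */
        const int iroot = 0;
        int rank, nprocs;
        double *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (rank == iroot) {
            /* the root holds the full array; its own chunk stays in place */
            buf = malloc(N * nprocs * sizeof(double));
            for (int i = 0; i < N * nprocs; i++)
                buf[i] = (double)i;
            MPI_Scatter(buf, N, MPI_DOUBLE, MPI_IN_PLACE, N, MPI_DOUBLE,
                        iroot, MPI_COMM_WORLD);
        } else {
            /* non-root ranks only need room for their own chunk; the send
               arguments are ignored here, so NULL is fine as sendbuf */
            buf = malloc(N * sizeof(double));
            MPI_Scatter(NULL, N, MPI_DOUBLE, buf, N, MPI_DOUBLE,
                        iroot, MPI_COMM_WORLD);
        }

        printf("rank %d: buf[0] = %g\n", rank, buf[0]);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched under mpirun, each rank prints the first element of its own chunk: 0 on the root, N * rank elsewhere.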
