Using MPI_Bcast for MPI communication


Problem description




I'm trying to broadcast a message from the root node to all other nodes using MPI_Bcast. However, whenever I run this program it always hangs at the beginning. Does anybody know what's wrong with it?

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
        int rank;
        int buf;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if(rank == 0) {
                buf = 777;
                MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);
        }
        else {
                MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                printf("rank %d receiving received %d\n", rank, buf);
        }

        MPI_Finalize();
        return 0;
}

Solution

This is a common source of confusion for people new to MPI. You don't use MPI_Recv() to receive data sent by a broadcast; you use MPI_Bcast().

E.g., what you want is this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
        int rank;
        int buf;
        const int root=0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if(rank == root) {
           buf = 777;
        }

        printf("[%d]: Before Bcast, buf is %d\n", rank, buf);

        /* everyone calls bcast, data is taken from root and ends up in everyone's buf */
        MPI_Bcast(&buf, 1, MPI_INT, root, MPI_COMM_WORLD);

        printf("[%d]: After Bcast, buf is %d\n", rank, buf);

        MPI_Finalize();
        return 0;
}

For MPI collective communications, everyone has to participate; everyone has to call the Bcast, or the Allreduce, or what have you. (That's why the Bcast routine has a parameter that specifies the "root", or who is doing the sending; if only the sender called Bcast, you wouldn't need this.) Everyone calls the broadcast, including the receivers; the receivers don't just post a receive.

The reason for this is that the collective operations can involve everyone in the communication, so that you state what you want to happen (everyone gets one process's data) rather than how it happens (e.g., the root processor loops over all the other ranks and does a send), so that there is scope for optimizing the communication patterns (e.g., a tree-based hierarchical communication that takes log(P) steps rather than P steps for P processes).
