How are MPI_Scatter and MPI_Gather used from C?


Problem description

So far, my application reads in a txt file with a list of integers. These integers need to be stored in an array by the master process, i.e. the processor with rank 0. That is working. Now, when I run the program, I have an if statement checking whether it is the master process; if it is, I execute the MPI_Scatter command. From what I understand, this will subdivide the array of numbers and pass it out to the slave processes, i.e. those with rank > 0. However, I'm not sure how to handle MPI_Scatter. How does a slave process receive its sub-array? How can I tell the non-master processes to do something with the sub-array?

Can someone please provide a simple example showing how the master process sends out elements from the array, has each slave add up its portion and return the sum to the master, which then adds all the sums together and prints the result? Just a dummy example I can use to understand the implementation.

My code so far:

#include <stdio.h>
#include <mpi.h>

//A pointer to the file to read in.
FILE *fr;

int main(int argc, char *argv[]) {

int rank,size,n,number_read;
char line[80];
int numbers[30];
int buffer[30];

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

fr = fopen ("int_data.txt","rt"); //We open the file to be read.

if(rank ==0){
printf("my rank = %d\n",rank);

//Reads in the flat file of integers  and stores it in the array 'numbers' of type int.
n=0;
while(fgets(line,80,fr) != NULL) {
  sscanf(line, "%d", &number_read);
  numbers[n] = number_read;
  printf("I am processor no. %d --> At element %d we have number: %d\n",rank,n,numbers[n]);
  n++;
}

fclose(fr);

MPI_Scatter(&numbers,2,MPI_INT,&buffer,2,MPI_INT,rank,MPI_COMM_WORLD);

}
else {
MPI_Gather ( &buffer, 2, MPI_INT, &numbers, 2, MPI_INT, 0, MPI_COMM_WORLD); 
printf("%d",buffer[0]);
}
MPI_Finalize();
return 0;
}

Solution

This is a common misunderstanding of how MPI operations work for people new to it, particularly with collective operations, where people try to start by using broadcast (MPI_Bcast) just from rank 0, expecting the call to somehow "push" the data to the other processors. But that's not really how MPI routines work; most MPI communication requires both the sender and the receiver to make MPI calls.

In particular, MPI_Scatter() and MPI_Gather() (and MPI_Bcast, and many others) are collective operations; they have to be called by all of the tasks in the communicator. Every processor in the communicator makes the same call, and together they perform the operation. (That's why scatter and gather both take the "root" process, where all the data goes to or comes from, as one of their parameters.) Doing it this way gives the MPI implementation a lot of scope to optimize the communication patterns.
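
For reference, the C prototypes (as standardized in MPI-3 and later, where the send buffer is const) show where the root argument sits. Note that the counts are per-rank, not totals: scatter's sendcount is what each rank receives, and gather's recvcount is what the root receives from each rank.

int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                void *recvbuf, int recvcount, MPI_Datatype recvtype,
                int root, MPI_Comm comm);

int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
               void *recvbuf, int recvcount, MPI_Datatype recvtype,
               int root, MPI_Comm comm);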

So here's a simple example (updated to include the gather):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *globaldata = NULL;   /* allocated only on the root rank */
    int localdata;

    if (rank == 0) {
        /* Only the root needs the full-size array. */
        globaldata = malloc(size * sizeof(int));
        for (int i=0; i<size; i++)
            globaldata[i] = 2*i+1;

        printf("Processor %d has data: ", rank);
        for (int i=0; i<size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    /* Every rank, including the root, calls the scatter; each rank
       receives one int into localdata. */
    MPI_Scatter(globaldata, 1, MPI_INT, &localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Processor %d has data %d\n", rank, localdata);
    localdata *= 2;
    printf("Processor %d doubling the data, now has %d\n", rank, localdata);

    /* Likewise, every rank calls the gather; the root collects one int
       from each rank back into globaldata. */
    MPI_Gather(&localdata, 1, MPI_INT, globaldata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Processor %d has data: ", rank);
        for (int i=0; i<size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    if (rank == 0)
        free(globaldata);

    MPI_Finalize();
    return 0;
}

Running it gives:

gpc-f103n084-$ mpicc -o scatter-gather scatter-gather.c -std=c99
gpc-f103n084-$ mpirun -np 4 ./scatter-gather
Processor 0 has data: 1 3 5 7 
Processor 0 has data 1
Processor 0 doubling the data, now has 2
Processor 3 has data 7
Processor 3 doubling the data, now has 14
Processor 2 has data 5
Processor 2 doubling the data, now has 10
Processor 1 has data 3
Processor 1 doubling the data, now has 6
Processor 0 has data: 2 6 10 14
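
For the sum example asked about in the question, the same pattern works: scatter a chunk to each rank, have each rank compute a local sum, gather the partial sums back, and let the root add them up. Here is a minimal sketch along those lines; the names (chunk, localsum, partialsums) are mine, the file read is replaced by generated data, and it assumes the total count is exactly 2*size so the chunks divide evenly (for uneven counts you'd want MPI_Scatterv):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int chunk = 2;            /* two ints per rank, as in the question */
    int *globaldata = NULL;
    int *partialsums = NULL;
    int localdata[2];
    int localsum = 0;

    if (rank == 0) {
        /* Stand-in for the file read: the integers 1 .. 2*size. */
        globaldata = malloc(chunk * size * sizeof(int));
        for (int i = 0; i < chunk * size; i++)
            globaldata[i] = i + 1;
        partialsums = malloc(size * sizeof(int));
    }

    /* All ranks call the scatter; each gets its own chunk of two ints. */
    MPI_Scatter(globaldata, chunk, MPI_INT, localdata, chunk, MPI_INT,
                0, MPI_COMM_WORLD);

    for (int i = 0; i < chunk; i++)
        localsum += localdata[i];
    printf("Processor %d has partial sum %d\n", rank, localsum);

    /* All ranks call the gather; the root receives one partial sum per rank. */
    MPI_Gather(&localsum, 1, MPI_INT, partialsums, 1, MPI_INT,
               0, MPI_COMM_WORLD);

    if (rank == 0) {
        int total = 0;
        for (int i = 0; i < size; i++)
            total += partialsums[i];
        printf("Total sum is %d\n", total);
        free(globaldata);
        free(partialsums);
    }

    MPI_Finalize();
    return 0;
}

In practice that final gather-then-add step is exactly what MPI_Reduce with MPI_SUM does in a single call, but the scatter/gather version above matches what the question asked for.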
