How to reuse MPI_Scatter and MPI_Gather in a loop


Problem description


I am trying to learn how to use MPI_Scatter and MPI_Gather multiple times, and print out the result after waiting for these two MPI functions to complete. In process 0, at the top of the program, I want to use a while loop that calls Scatter and Gather. Once all of those calculations are done, I want to send this array back to these functions to do more calculations. I have explained in the code below what I am trying to do; the comments in /*.....*/ are the tasks I am trying to achieve.
The following code is run with 4 processors:
:$ mpicc test.c -o test
:$ mpirun -np 4 test

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int globaldata[8];
    int localdata[2];
    int counter, i;
    if (rank == 0) 
    {
        for (i=0; i<size*2; i++)//initializing array to all zeros, one time
            globaldata[i] = 0;

        /*counter=0;
        do
        {
            counter++;  */
            printf("Sending at Processor %d has data: ", rank);
            for (i=0; i<size*2; i++)
                printf("%d ", globaldata[i]);
            printf("\n");

            /*After MPI_Gather is done, I want to get the newly assigned array here.
            Now the globaldata array should hold values: 0 0 1 1 2 2 3 3
            Therefore, in the next iteration of this while loop, these array values need
            to be sent for a new calculation with Scatter & Gather
        }while(counter<2);*/

        //Following need to be executed after all the scatter and gather has completed
        printf("Finally at Processor %d has data: ", rank);
        for (i=0; i<size*2; i++)//Here the result should be: 0 0 2 2 3 3 4 4
            printf("%d ", globaldata[i]);
        printf("\n");
    }


    MPI_Scatter(globaldata, 2, MPI_INT, &localdata, 2, MPI_INT, 0, MPI_COMM_WORLD); 

    localdata[0]= localdata[0]+rank;
    localdata[1]= localdata[1]+rank;

    MPI_Gather(&localdata, 2, MPI_INT, globaldata, 2, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {//Currently I can only see the newly assigned array values if I print out the result at the bottom
        printf("At the bottom, Processor %d has data: ", rank);
        for (i=0; i<size*2; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }


    MPI_Finalize();
    return 0;
}


More explanation of what I am trying to do above: I want to send my globaldata array to all processors and then get an updated globaldata array back. Once I have the updated array, I want to resend it to all the other processes again to do more calculations. I have written the following code, which does a similar job using MPI_Send and MPI_Recv. Here I use MPI_Send to send my array to all processors; they change it and send it back to the root/process 0. Once I get the modified array, the do-while loop runs again and performs more calculations. What I am trying to do is use MPI_Scatter and MPI_Gather in a similar way, where I get an updated globaldata array and send it back through MPI_Scatter and MPI_Gather to change that array again.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int globaldata[8];
    int counter, i;
    if (rank == 0) 
    {
        for (i=0; i<size*2; i++)
            globaldata[i] = 0;

        counter=0;
        do
        {   /*because of this do-while loop the "globaldata" array will always be updated and resent for more calculations*/
            counter++;  
            printf("Sending at Processor %d has data: ", rank);
            for (i=0; i<size*2; i++)
                printf("%d ", globaldata[i]);
            printf("\n");

            for(i = 0; i < 4; i++)
            {
                MPI_Send(&globaldata, 8, MPI_INT, i, 0, MPI_COMM_WORLD);
            }
            for(i = 1; i < 4; i++)
            {         
                MPI_Recv(&globaldata, 8, MPI_INT, i, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }

        }while(counter<2);

        /*Following executes after all the above calculations has completed*/
        printf("Finally at Processor %d has data: ", rank);
        for (i=0; i<size*2; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    counter=0;
    do
    {
        counter++; 
        MPI_Recv(&globaldata, 8, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); 
        globaldata[rank]=globaldata[rank]+rank;
        globaldata[rank+1]=globaldata[rank+1]+rank;
        MPI_Send(&globaldata, 8, MPI_INT, 0, 99, MPI_COMM_WORLD);
    }while(counter<2);

    MPI_Finalize();
    return 0;
}

Answer


Just put the scatter and the gather together with the local processing inside the loop:

if (rank == 0)
{
   for (i = 0; i < size*2; i++)
      globaldata[i] = 0;
}

for (counter = 0; counter < 2; counter++)
{
   // if (rank == 0)
   // {
   //    pre-process globaldata
   // }

   MPI_Scatter(globaldata, 2, MPI_INT, localdata, 2, MPI_INT, 0, MPI_COMM_WORLD);

   localdata[0] += rank;
   localdata[1] += rank;

   MPI_Gather(localdata, 2, MPI_INT, globaldata, 2, MPI_INT, 0, MPI_COMM_WORLD);

   // if (rank == 0)
   // {
   //    post-process globaldata
   // }
}

if (rank == 0)
{
   printf("Finally at Processor %d has data: ", rank);
      for (i=0; i<size*2; i++)
         printf("%d ", globaldata[i]);
   printf("\n");
}


Or, if you prefer to keep the logic for the "master" process separate:

if (rank == 0)
{
   for (i = 0; i < size*2; i++)
      globaldata[i] = 0;

   for (counter = 0; counter < 2; counter++)
   {
      // pre-process globaldata

      MPI_Scatter(globaldata, 2, MPI_INT, localdata, 2, MPI_INT, 0, MPI_COMM_WORLD);

      // Not really useful as rank == 0 and it changes nothing
      localdata[0] += rank;
      localdata[1] += rank;

      MPI_Gather(localdata, 2, MPI_INT, globaldata, 2, MPI_INT, 0, MPI_COMM_WORLD);

      // post-process globaldata
   }

   printf("Finally at Processor %d has data: ", rank);
      for (i=0; i<size*2; i++)
         printf("%d ", globaldata[i]);
   printf("\n");
}
else
{
   for (counter = 0; counter < 2; counter++)
   {
      MPI_Scatter(globaldata /* or NULL */, 2, MPI_INT, localdata, 2, MPI_INT,
                  0, MPI_COMM_WORLD);

      localdata[0] += rank;
      localdata[1] += rank;

      MPI_Gather(localdata, 2, MPI_INT, globaldata /* or NULL */, 2, MPI_INT,
                 0, MPI_COMM_WORLD);
   }
}


Make sure that the loops in both sections of the code have the same number of iterations. Also note that MPI_Scatter sends a chunk of globaldata to the root rank too and MPI_Gather collects a chunk of data from the root, therefore the master process is also expected to perform some data processing.
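
For reference, here is a minimal self-contained sketch built around the first variant above. It assumes exactly 4 ranks (so the 8-element globaldata matches size*2, as in the original question) and shows that rank 0 also receives and processes its own chunk on every iteration:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int size, rank, i, counter;
    int globaldata[8];   /* assumes 4 ranks x 2 elements, as in the question */
    int localdata[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        for (i = 0; i < size*2; i++)
            globaldata[i] = 0;

    for (counter = 0; counter < 2; counter++)
    {
        /* every rank, including the root, receives a 2-element chunk */
        MPI_Scatter(globaldata, 2, MPI_INT, localdata, 2, MPI_INT, 0, MPI_COMM_WORLD);

        localdata[0] += rank;
        localdata[1] += rank;

        /* every rank, including the root, contributes its chunk back */
        MPI_Gather(localdata, 2, MPI_INT, globaldata, 2, MPI_INT, 0, MPI_COMM_WORLD);
    }

    if (rank == 0)
    {
        /* with 4 ranks and 2 iterations this sketch prints: 0 0 2 2 4 4 6 6 */
        printf("Finally at Processor %d has data: ", rank);
        for (i = 0; i < size*2; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}

Compiled and launched the same way as the original (mpicc test.c -o test; mpirun -np 4 test), every rank stays inside the loop, so the two collectives match up on each iteration.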

