How to pass a 2D array in MPI and create a dynamic tag value using C?


Problem Description


I am new to MPI programming. I have an 8-by-10 array, and I need to find the sum of each row in parallel. In rank 0 (process 0), it generates the 8-by-10 matrix using a 2-dimensional array. I then use the tag number as the first index value (row number) of the array. This way, I can use a unique buffer to send through Isend. However, it looks like my method of tag number generation for Isend is not working. Can you please look into the following code and tell me whether I am passing the 2D array and the tag number correctly? When I run this code, it stops just after executing rank 1 and waits. I use 3 processes for this example and test with the command mpirun -np 3 test. Please let me know how to tackle this problem, with an example if possible.

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char *argv[])
{
        MPI_Init(&argc, &argv);
        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        int world_size;
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);        
        int tag = 1;        
        int arr[8][10]; 
        MPI_Request request;
        MPI_Status status;
        int source = 0;
        int dest;

        printf ("\n--Current Rank: %d\n", world_rank);

        if (world_rank == 0)
        {
            int i = 0;
            int a, b, x, y;

            printf("* Rank 0 excecuting\n");

            for(a=0; a<8/(world_size-1); a++)//if -np is 3, this will loop 4 times
            {           
                for(b=0; b<(world_size-1); b++)//if -np is 3, this loops will loop 2 times
                {//So, if -np is 3, due to both of these loops, Isend will be called 8 times
                    dest = b+1;     
                    tag = a+b;//create a uniqe tag value each time, which can be use as first index value of array
                    //Error: This tag value passing to Isend doesn't seems to be workiing
                    MPI_Isend(&arr[tag][0], 10, MPI_INT, dest, tag, MPI_COMM_WORLD, &request);  
                }
            }

            for(x=0; x<8; x++)//Generating the whole 8 by 10 2D array
            {   
                i++;
                for ( y = 0; y < 10; y++ )
                {
                    arr[x][y] = i; 
                }   
            }               
        }
        else 
        {
            int a, b;                   
            for(b=1; b<=8/(world_size-1); b++)
            {
                int sum = 0;
                int i;
                MPI_Irecv(&arr[tag][0], 10, MPI_INT, source, tag, MPI_COMM_WORLD, &request);
                MPI_Wait (&request, &status);               
                        //Error: not getting the correct tag value
                for(i = 0; i<10; i++)
                {   
                    sum = arr[tag][i]+sum;
                }
                printf("\nSum is: %d at rank: %d and tag is:%d\n", sum, world_rank, tag);
            }           
        }
        MPI_Finalize();
}

Solution

The tag issue is because of how the tag is computed (or not) on different processes. You're initializing the tag values for all processes as

int tag = 1; 

and later, for process rank 0 you set the tag to

tag = a+b;

which, the first time this assignment runs, sets tag to 0 because both a and b start out as zero. However, for processes with rank above 0, the tag is never changed. They will continue to have tag set to 1.

The tag uniquely identifies the message being sent by MPI_Isend and MPI_Irecv, which means that a send and its corresponding receive must have the same tag for the data transfer to succeed. Because the tags are mismatched between processes for most of the receives, the transfers are mostly unsuccessful. This causes processes with rank higher than 0 to eventually block (wait) forever on the call to MPI_Wait.

In order to fix this, you have to make sure to change the tags for the processes with rank above zero. However, before we can do that, there are a few other issues worth touching up on.

With the way you've set your tag for the rank 0 process right now, tag can only ever have values 0 to 4 (assuming 3 processes). This is because a is limited to the range 0 to 3, and b can only have values 0 or 1. The maximum possible sum of these values is 4. This means that when you access your array using arr[tag][0], you will miss out on a lot of the data, and you'll re-send the same rows several times. I recommend changing the way you approach sending each subarray (which you're currently accessing with tag) so that you have only one for loop to determine which subarray to send, rather than two nested loops. Then, you can calculate the process to send the array to as

dest = subarray_index%(world_size - 1) + 1;

This will alternate the destinations between the processes with rank greater than zero. You can keep the tag as just subarray_index. On the receiving side, you'll need to calculate the tag per process, per receive.

Finally, I saw that you were initializing your array after you sent the data. You want to do that beforehand.

Combining all these aspects, we get

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char *argv[])
{
        MPI_Init(&argc, &argv);
        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        int world_size;
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);        
        int tag = 1;        
        int arr[8][10]; 
        MPI_Request request;
        MPI_Status status;
        int source = 0;
        int dest;

        printf ("\n--Current Rank: %d\n", world_rank);

        if (world_rank == 0)
        {
            int i = 0;
            int x, y;

            printf("* Rank 0 executing\n");
            //I've moved the array generation to before the sends.
            for(x=0; x<8; x++)//Generating the whole 8 by 10 2D array
            {   
                i++;
                for ( y = 0; y < 10; y++ )
                {
                    arr[x][y] = i; 
                }   
            }

            //I added a subarray_index as mentioned above.
            int subarray_index;
            MPI_Request send_requests[8];
            for(subarray_index=0; subarray_index < 8; subarray_index++)
            {
                dest = subarray_index%(world_size - 1) + 1;     
                tag = subarray_index;
                MPI_Isend(&arr[subarray_index][0], 10, MPI_INT, dest, tag, MPI_COMM_WORLD, &send_requests[subarray_index]);
            }
            //Complete every send request. Reusing one request handle for
            //all the sends (as in the original) never completes them, which
            //is erroneous MPI usage and leaks request objects.
            MPI_Waitall(8, send_requests, MPI_STATUSES_IGNORE);

        }
        else 
        {
            int b;                   
            for(b=0; b<8/(world_size-1); b++)
            {
                int sum = 0;
                int i;
                //We have to do extra calculations here. These match tag, dest, and subarray.
                int my_offset = world_rank-1;
                tag = b*(world_size-1) + my_offset;
                int subarray = b;
                MPI_Irecv(&arr[subarray][0], 10, MPI_INT, source, tag, MPI_COMM_WORLD, &request);
                MPI_Wait (&request, &status);               
                for(i = 0; i<10; i++)
                {   
                    sum = arr[subarray][i]+sum;
                }
                printf("\nSum is: %d at rank: %d and tag is:%d\n", sum, world_rank, tag);
            }           
        }
        MPI_Finalize();
}

There's one thing that still seems a bit unfinished in this version for you to consider: what will happen if your number of processes changes? For example, if you have 4 processes instead of 3, it looks like you may run into some trouble with the loop

for(b=0; b<8/(world_size-1); b++)

because each process will execute it the same number of times, but the amount of data sent doesn't cleanly split for 3 workers (non-rank-zero processes).

However, if that is not a concern to you, then you do not need to handle such cases.
