MPI partition matrix into blocks

Question

I want to partition a matrix into blocks (not stripes) and then distribute these blocks using MPI_Scatter.

I came up with a solution that works, but I think it is far from "best practice". I have an 8x8 matrix filled with the numbers 0 to 63. I divide it into four 4x4 blocks using MPI_Type_vector and distribute them via MPI_Send, but this requires some extra computation, since I have to compute the offset of each block within the big matrix.

If I use scatter, the first (top-left) block is transferred OK, but the other blocks are not (the offset for the start of each block is wrong).

So is it possible to transfer blocks of a matrix using MPI_Scatter, or what is the best way to do the desired decomposition?

Here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define SIZE 8


int main(void) {

        MPI_Init(NULL, NULL);
        int p, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &p);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        char i;

        char a[SIZE*SIZE];
        char b[(SIZE/2)*(SIZE/2)];

        MPI_Datatype columntype;
        MPI_Datatype columntype2;

        MPI_Type_vector(4, 4, SIZE, MPI_CHAR, &columntype2);
        MPI_Type_create_resized( columntype2, 0, sizeof(MPI_CHAR), &columntype );
        MPI_Type_commit(&columntype);

        if(rank == 0) {
                for( i = 0; i < SIZE*SIZE; i++) {
                        a[i] = i;
                }

                for(int rec=0; rec < p; rec++) {
                        int offset = (rec%2)*4 + (rec/2)*32;
                      MPI_Send (a+offset, 1, columntype, rec, 0, MPI_COMM_WORLD);
                }
        }
        MPI_Recv (b, 16, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        //MPI_Scatter(&a, 1, boki, &b, 16, MPI_CHAR , 0, MPI_COMM_WORLD);

        printf("rank= %d  b= 
%d %d %d %d
%d %d %d %d
%d %d %d %d
%d %d %d %d
", rank, b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7], b[8], b[9], b[10], b[11], b[12], b[13], b[14], b[15]);

        MPI_Finalize();

        return 0;
}

Answer

What you've got is pretty much "best practice"; it's just a bit confusing until you get used to it.

Two things, though:

First, be careful with this: sizeof(MPI_CHAR) is, I assume, 4 bytes, not 1. MPI_CHAR is an (integer) constant that describes (to the MPI library) a character. You probably want sizeof(char), or SIZE/2*sizeof(char), or anything else convenient. But the basic idea of doing a resize is right.
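For reference, the corrected resize for the question's 8x8 case would look something like the following minimal sketch (it reuses the variable names from the question's code; the extent is shrunk to a single char so that consecutive blocks can start one element apart):

MPI_Datatype columntype, columntype2;

/* one 4x4 block taken out of an 8x8 matrix of chars */
MPI_Type_vector(4, 4, SIZE, MPI_CHAR, &columntype2);

/* resize the extent to one char -- note sizeof(char), not sizeof(MPI_CHAR) */
MPI_Type_create_resized(columntype2, 0, sizeof(char), &columntype);
MPI_Type_commit(&columntype);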

Second, I think you're stuck using MPI_Scatterv, because there's no easy way to make the offset between consecutive blocks the same size. That is, the first element of the first block is at a[0], the first element of the second block is at a[SIZE/2] (a jump of SIZE/2), but the first element of the next block is at a[SIZE*(SIZE/2)] (a jump of (SIZE-1)*(SIZE/2)). So you need to be able to generate the offsets manually.
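As a concrete illustration for the question's 8x8 matrix scattered over 4 processes (a minimal sketch, assuming a block type whose extent has been resized to one char as above), the displacements are given in units of that one-char extent:

/* 2x2 process decomposition of the 8x8 matrix into 4x4 blocks */
int counts[4] = {1, 1, 1, 1};   /* one block per rank */
int disps[4]  = {0,             /* rank 0: rows 0-3, cols 0-3 */
                 4,             /* rank 1: rows 0-3, cols 4-7 */
                 32,            /* rank 2: rows 4-7, cols 0-3 (4 rows of 8 chars) */
                 36};           /* rank 3: rows 4-7, cols 4-7 */

MPI_Scatterv(a, counts, disps, columntype,
             b, 16, MPI_CHAR, 0, MPI_COMM_WORLD);

These are exactly the offsets the question's loop computes as (rec%2)*4 + (rec/2)*32.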

The following seems to work for me (I generalized it a little bit to make it clearer when "size" means "number of rows" vs. "number of columns", etc.):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define COLS  12
#define ROWS  8

int main(int argc, char **argv) {

    MPI_Init(&argc, &argv);
    int p, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char i;

    char a[ROWS*COLS];
    const int NPROWS=2;  /* number of rows in _decomposition_ */
    const int NPCOLS=3;  /* number of cols in _decomposition_ */
    const int BLOCKROWS = ROWS/NPROWS;  /* number of rows in _block_ */
    const int BLOCKCOLS = COLS/NPCOLS; /* number of cols in _block_ */

    if (rank == 0) {
        for (int ii=0; ii<ROWS*COLS; ii++) {
            a[ii] = (char)ii;
        }
    }

    if (p != NPROWS*NPCOLS) {
        fprintf(stderr,"Error: number of PEs %d != %d x %d
", p, NPROWS, NPCOLS);
        MPI_Finalize();
        exit(-1);
    }
    char b[BLOCKROWS*BLOCKCOLS];
    for (int ii=0; ii<BLOCKROWS*BLOCKCOLS; ii++) b[ii] = 0;

    MPI_Datatype blocktype;
    MPI_Datatype blocktype2;

    MPI_Type_vector(BLOCKROWS, BLOCKCOLS, COLS, MPI_CHAR, &blocktype2);
    MPI_Type_create_resized( blocktype2, 0, sizeof(char), &blocktype);
    MPI_Type_commit(&blocktype);

    int disps[NPROWS*NPCOLS];
    int counts[NPROWS*NPCOLS];
    for (int ii=0; ii<NPROWS; ii++) {
        for (int jj=0; jj<NPCOLS; jj++) {
            disps[ii*NPCOLS+jj] = ii*COLS*BLOCKROWS+jj*BLOCKCOLS;
            counts [ii*NPCOLS+jj] = 1;
        }
    }

    MPI_Scatterv(a, counts, disps, blocktype, b, BLOCKROWS*BLOCKCOLS, MPI_CHAR, 0, MPI_COMM_WORLD);
    /* each proc prints its "b" out, in order */
    for (int proc=0; proc<p; proc++) {
        if (proc == rank) {
            printf("Rank = %d\n", rank);
            if (rank == 0) {
                printf("Global matrix: \n");
                for (int ii=0; ii<ROWS; ii++) {
                    for (int jj=0; jj<COLS; jj++) {
                        printf("%3d ",(int)a[ii*COLS+jj]);
                    }
                    printf("\n");
                }
            }
            printf("Local Matrix:\n");
            for (int ii=0; ii<BLOCKROWS; ii++) {
                for (int jj=0; jj<BLOCKCOLS; jj++) {
                    printf("%3d ",(int)b[ii*BLOCKCOLS+jj]);
                }
                printf("\n");
            }
            printf("\n");
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }

    MPI_Finalize();

    return 0;
}

Running it:

$ mpirun -np 6 ./matrix

Rank = 0
Global matrix: 
  0   1   2   3   4   5   6   7   8   9  10  11 
 12  13  14  15  16  17  18  19  20  21  22  23 
 24  25  26  27  28  29  30  31  32  33  34  35 
 36  37  38  39  40  41  42  43  44  45  46  47 
 48  49  50  51  52  53  54  55  56  57  58  59 
 60  61  62  63  64  65  66  67  68  69  70  71 
 72  73  74  75  76  77  78  79  80  81  82  83 
 84  85  86  87  88  89  90  91  92  93  94  95 
Local Matrix:
  0   1   2   3 
 12  13  14  15 
 24  25  26  27 
 36  37  38  39 

Rank = 1
Local Matrix:
  4   5   6   7 
 16  17  18  19 
 28  29  30  31 
 40  41  42  43 

Rank = 2
Local Matrix:
  8   9  10  11 
 20  21  22  23 
 32  33  34  35 
 44  45  46  47 

Rank = 3
Local Matrix:
 48  49  50  51 
 60  61  62  63 
 72  73  74  75 
 84  85  86  87 

Rank = 4
Local Matrix:
 52  53  54  55 
 64  65  66  67 
 76  77  78  79 
 88  89  90  91 

Rank = 5
Local Matrix:
 56  57  58  59 
 68  69  70  71 
 80  81  82  83 
 92  93  94  95 
