How can I send rows of a matrix to all the processes using MPI_Scatterv?


Question

I am working with the MPI interface. I want to split a matrix (by rows) and distribute the parts among all the processes.

For example, I have this 7x7 square matrix M.

M = [
    0.00    1.00    2.00    3.00    4.00    5.00    6.00    
    7.00    8.00    9.00    10.00   11.00   12.00   13.00
    14.00   15.00   16.00   17.00   18.00   19.00   20.00
    21.00   22.00   23.00   24.00   25.00   26.00   27.00
    28.00   29.00   30.00   31.00   32.00   33.00   34.00
    35.00   36.00   37.00   38.00   39.00   40.00   41.00
    42.00   43.00   44.00   45.00   46.00   47.00   48.00
];

I have 3 processes, so the split could be:


  • Process 0 gets rows 0 and 1

  • Process 1 gets rows 2, 3 and 4

  • Process 2 gets rows 5 and 6

After the Scatterv, it should look like this:

Process 0:
M0 = [
    0.00    1.00    2.00    3.00    4.00    5.00    6.00    
    7.00    8.00    9.00    10.00   11.00   12.00   13.00
];

Process 1:
M1 = [
    14.00   15.00   16.00   17.00   18.00   19.00   20.00
    21.00   22.00   23.00   24.00   25.00   26.00   27.00
    28.00   29.00   30.00   31.00   32.00   33.00   34.00
];

Process 2:
M2 = [
    35.00   36.00   37.00   38.00   39.00   40.00   41.00
    42.00   43.00   44.00   45.00   46.00   47.00   48.00
];

I think I was clear enough about what I want to achieve. Feel free to ask if I didn't explain it well.

Now, here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define BLOCK_LOW(id,p,n) ((id)*(n)/(p))
#define BLOCK_HIGH(id,p,n) ((id+1)*(n)/(p) - 1)
#define BLOCK_SIZE(id,p,n) ((id+1)*(n)/(p) - (id)*(n)/(p))
#define BLOCK_OWNER(index,p,n) (((p)*((index)+1)-1)/(n))

void **matrix_create(size_t m, size_t n, size_t size) {
   size_t i; 
   void **p= (void **) malloc(m*n*size+ m*sizeof(void *));
   char *c=  (char*) (p+m);
   for(i=0; i<m; ++i)
      p[i]= (void *) c+i*n*size;
   return p;
}

void matrix_print(double **M, size_t m, size_t n, char *name) {
    size_t i,j;
    printf("%s=[",name);
    for(i=0; i<m; ++i) {
        printf("\n  ");
        for(j=0; j<n; ++j)
            printf("%f  ",M[i][j]);
    }
    printf("\n];\n");
}

int main(int argc, char *argv[]) {

    int npes, myrank, root = 0, n = 7, rows, i, j, *sendcounts, *displs;
    double **m, **mParts;

    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD,&npes);
    MPI_Comm_rank(MPI_COMM_WORLD,&myrank);

    // Matrix M is generated in the root process (process 0)
    if (myrank == root) {
        m = (double**)matrix_create(n, n, sizeof(double));
        for (i = 0; i < n; ++i)
            for (j = 0; j < n; ++j)
                m[i][j] = (double)(n * i + j);
    }

    // Array containing the numbers of rows for each process
    sendcounts = malloc(n * sizeof(int));
    // Array containing the displacement for each data chunk
    displs = malloc(n * sizeof(int));
    // For each process ...
    for (j = 0; j < npes; j++) {
        // Sets each number of rows
        sendcounts[j] = BLOCK_SIZE(j, npes, n);
        // Sets each displacement
        displs[j] = BLOCK_LOW(j, npes, n);
    }
    // Each process gets the number of rows that he is going to get
    rows = sendcounts[myrank];
    // Creates the empty matrices for the parts of M
    mParts = (double**)matrix_create(rows, n, sizeof(double));
    // Scatters the matrix parts through all the processes
    MPI_Scatterv(m, sendcounts, displs, MPI_DOUBLE, mParts, rows, MPI_DOUBLE, root, MPI_COMM_WORLD);

    // This is where I get the Segmentation Fault
    if (myrank == 1) matrix_print(mParts, rows, n, "mParts");

    MPI_Finalize();
}

I get a Segmentation Fault when I try to read the scattered data, suggesting that the scatter operation did not work. I already did this with one-dimensional arrays and it worked. But with two-dimensional arrays, things get a little trickier.

Could you help me find the bug, please?

Thanks

Answer

MPI_Scatterv needs a pointer to the data, and the data should be contiguous in memory. Your program is fine on the second point, but you are passing MPI_Scatterv a pointer to pointers to the data. So the call should be changed to:

MPI_Scatterv(&m[0][0], sendcounts, displs, MPI_DOUBLE, &mParts[0][0], sendcounts[myrank], MPI_DOUBLE, root, MPI_COMM_WORLD);
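
This works because matrix_create allocates the row pointers and all the elements in a single malloc, so the n*n doubles form one contiguous block starting at &m[0][0]. A quick sanity check of that layout (purely illustrative; add #include <assert.h> and place these lines inside main):

double **M = (double **) matrix_create(7, 7, sizeof(double));
assert(&M[1][0] == &M[0][0] + 7);         // rows are adjacent: row 1 starts 7 doubles after row 0
assert(&M[2][3] == &M[0][0] + 2*7 + 3);   // M[i][j] is the same double as (&M[0][0])[i*7 + j]
free(M);                                  // one allocation, one free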

There are also a couple of things to change for sendcounts and displs: to go 2D, these counts should be multiplied by n. And the receive count in MPI_Scatterv is no longer rows, but sendcounts[myrank].
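
As a concrete illustration (counts are in doubles, not rows), the loop in the final code below yields the following for n = 7 and npes = 3:

// sendcounts = { 14, 14, 21 }   // 2, 2 and 3 rows of 7 doubles each
// displs     = { 0, 14, 28 }    // offset of each block in the flattened matrix
// the receive count on rank r is sendcounts[r]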

Here is the final code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define BLOCK_LOW(id,p,n) ((id)*(n)/(p))
#define BLOCK_HIGH(id,p,n) ((id+1)*(n)/(p) - 1)
#define BLOCK_SIZE(id,p,n) ((id+1)*(n)/(p) - (id)*(n)/(p))
#define BLOCK_OWNER(index,p,n) (((p)*((index)+1)-1)/(n))

void **matrix_create(size_t m, size_t n, size_t size) {
    size_t i; 
    void **p= (void **) malloc(m*n*size+ m*sizeof(void *));
    char *c=  (char*) (p+m);
    for(i=0; i<m; ++i)
        p[i]= (void *) c+i*n*size;
    return p;
}

void matrix_print(double **M, size_t m, size_t n, char *name) {
    size_t i,j;
    printf("%s=[",name);
    for(i=0; i<m; ++i) {
        printf("\n  ");
        for(j=0; j<n; ++j)
            printf("%f  ",M[i][j]);
    }
    printf("\n];\n");
}

int main(int argc, char *argv[]) {

    int npes, myrank, root = 0, n = 7, rows, i, j, *sendcounts, *displs;
    double **m, **mParts;

    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD,&npes);
    MPI_Comm_rank(MPI_COMM_WORLD,&myrank);

    // Matrix M is generated in the root process (process 0)
    if (myrank == root) {
        m = (double**)matrix_create(n, n, sizeof(double));
        for (i = 0; i < n; ++i)
            for (j = 0; j < n; ++j)
                m[i][j] = (double)(n * i + j);
    }

    // Array containing the numbers of rows for each process
    sendcounts = malloc(n * sizeof(int));
    // Array containing the displacement for each data chunk
    displs = malloc(n * sizeof(int));
    // For each process ...
    for (j = 0; j < npes; j++) {
        // Sets each number of rows
        sendcounts[j] = BLOCK_SIZE(j, npes, n)*n;
        // Sets each displacement
        displs[j] = BLOCK_LOW(j, npes, n)*n;
    }
    // Each process gets the number of rows that he is going to get
    rows = sendcounts[myrank]/n;
    // Creates the empty matrixes for the parts of M
    mParts = (double**)matrix_create(rows, n, sizeof(double));
    // Scatters the matrix parts through all the processes
    MPI_Scatterv(&m[0][0], sendcounts, displs, MPI_DOUBLE, &mParts[0][0], sendcounts[myrank], MPI_DOUBLE, root, MPI_COMM_WORLD);

    // Process 1 prints its part of the matrix
    if (myrank == 1) matrix_print(mParts, rows, n, "mParts");

    MPI_Finalize();
}
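
One caveat: on non-root ranks m is never allocated, so evaluating &m[0][0] there reads an uninitialized pointer. Since MPI_Scatterv only looks at the send buffer on the root, a slightly safer variant of the call could be (the sendbuf variable is just illustrative):

double *sendbuf = (myrank == root) ? &m[0][0] : NULL;   // only the root's send buffer is read
MPI_Scatterv(sendbuf, sendcounts, displs, MPI_DOUBLE,
             &mParts[0][0], sendcounts[myrank], MPI_DOUBLE, root, MPI_COMM_WORLD);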

If you want to know more about 2D arrays and MPI, look here: http://stackoverflow.com/questions/9269399/sending-blocks-of-2d-array-in-c-using-mpi/9271753#9271753

Look also at the DMDA structure of the PETSc library.
