MPI_Scatter - sending columns of 2D array

Problem description

I want to send a 2D array's columns, each to a separate process. I currently have one whole 2D array, and I am stuck with MPI_Scatter. How do I send whole columns as a field?

Thanks

Edit:

I have an array: float a[100][101]

and I have tried to send the array with:

float send;
MPI_Scatter ((void *)a, n, MPI_FLOAT,(void *)&send  , 1, MPI_INT,0, MPI_COMM_WORLD);

EDIT2:

I have made a new type_vector:

MPI_Datatype newtype;

MPI_Type_vector(n,          /* count: # of column elements */
    1,                      /* blocklength: 1 element per block */
    n+1,                    /* stride: skip n+1 elements (one full row) */
    MPI_FLOAT,              /* elements are float */
    &newtype);              /* MPI derived datatype */

MPI_Type_commit(&newtype);

and now I am trying to send it to my other processes. The matrix is filled with floats; my matrix is n x (n+1), and for testing n=5, so it is a 5 x 6 matrix. What call to Scatter would work, and what approach should I take on the side of the other processes? I mean, how do I receive the data that is sent by the scatter?

Recommended answer

This is very similar to this question: How to MPI_Gatherv columns from processor, where each process may send different number of columns (http://stackoverflow.com/questions/5371733/how-to-mpi-gatherv-columns-from-processor-where-each-process-may-send-different/5373104#5373104). The issue is that columns aren't contiguous in memory, so you have to play around.

As is always the case in C, lacking real multidimensional arrays, you have to be a little careful about memory layout. I believe in C it's the case that a statically-declared array like

float a[nrows][ncols]

will be contiguous in memory, so you should be alright for now. However, be aware that as soon as you go to dynamic allocation, this will no longer be the case; you'd have to allocate all the data at once to make sure that you get contiguous data, eg

float **floatalloc2d(int n, int m) {
    /* one contiguous block for all the data, plus an array of row pointers */
    float *data = (float *)malloc(n*m*sizeof(float));
    float **array = (float **)malloc(n*sizeof(float *));
    for (int i=0; i<n; i++)
        array[i] = &(data[i*m]);

    return array;
}

void floatfree2d(float **array) {
    free(array[0]);     /* frees the contiguous data block */
    free(array);
}

/* ... */
float **a;
int nrows = 3;
int ncols = 2;
a = floatalloc2d(nrows, ncols);

but I think you're ok for now.
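
For contrast, here is a short sketch (hypothetical variable names, not part of the answer's code) of the usual row-by-row allocation that breaks contiguity: each row is its own allocation, so a column-stride datatype can't step across rows.

/* What NOT to do here: each row is a separate allocation, so the matrix
   is not one contiguous block and the column datatype below won't work. */
float **bad = (float **)malloc(nrows * sizeof(float *));
for (int i = 0; i < nrows; i++)
    bad[i] = (float *)malloc(ncols * sizeof(float));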

Now that you have your 2d array one way or another, you have to create your type. The type you've described is fine if you are just sending one column; but the trick here is that if you're sending multiple columns, each column starts only one float past the start of the previous one, even though the column itself spans almost the whole array! So you need to move the upper bound of the type for this to work:

MPI_Datatype col, coltype;

MPI_Type_vector(nrows,      /* count: one element from each row */
    1,                      /* blocklength: 1 float per block */
    ncols,                  /* stride: a full row between column elements */
    MPI_FLOAT,
    &col);

MPI_Type_commit(&col);
/* shrink the extent to one float so consecutive columns start one float apart */
MPI_Type_create_resized(col, 0, 1*sizeof(float), &coltype);
MPI_Type_commit(&coltype);

will do what you want. NOTE that the receiving processes will have different types than the sending process, because they are storing a smaller number of columns; so the stride between elements is smaller.
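
As an aside, here is a minimal sketch of using the single-column type point-to-point, assuming a hypothetical column index j, a receiver at rank 1, and tag 0; the column arrives at the receiver as nrows contiguous floats:

if (rank == 0) {
    /* count=1 of coltype starting at &(a[0][j]) picks out column j of a */
    MPI_Send(&(a[0][j]), 1, coltype, 1, 0, MPI_COMM_WORLD);
} else if (rank == 1) {
    float *colbuf = (float *)malloc(nrows * sizeof(float));  /* plain contiguous buffer */
    MPI_Recv(colbuf, nrows, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* colbuf[i] now holds a[i][j] */
    free(colbuf);
}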

Finally, you can now do your scatter,

int size, rank;
float **a, **b;
float *sendptr;

MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0) {
    a = floatalloc2d(nrows, ncols);
    sendptr = &(a[0][0]);
} else {
    sendptr = NULL;
}
int ncolsperproc = ncols/size;  /* we're assuming this divides evenly */
b = floatalloc2d(nrows, ncolsperproc);

MPI_Datatype acol, acoltype, bcol, bcoltype;

if (rank == 0) {
    /* sender's column type: stride is the full row length of the big array */
    MPI_Type_vector(nrows, 1, ncols, MPI_FLOAT, &acol);
    MPI_Type_commit(&acol);
    MPI_Type_create_resized(acol, 0, 1*sizeof(float), &acoltype);
    MPI_Type_commit(&acoltype);
}
/* receivers' column type: stride is the smaller local row length */
MPI_Type_vector(nrows, 1, ncolsperproc, MPI_FLOAT, &bcol);
MPI_Type_commit(&bcol);
MPI_Type_create_resized(bcol, 0, 1*sizeof(float), &bcoltype);
MPI_Type_commit(&bcoltype);

MPI_Scatter(sendptr, ncolsperproc, acoltype, &(b[0][0]), ncolsperproc, bcoltype,
            0, MPI_COMM_WORLD);
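
Once the scatter is done, you would typically free the derived types and the arrays; a small cleanup sketch using the helpers defined above:

if (rank == 0) {
    MPI_Type_free(&acol);
    MPI_Type_free(&acoltype);
    floatfree2d(a);
}
MPI_Type_free(&bcol);
MPI_Type_free(&bcoltype);
floatfree2d(b);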
