Sending and receiving 2D array over MPI


Question

The problem I am trying to solve is the following:

The C++ serial code I have computes across a large 2D matrix. To optimize this process, I wish to split this large 2D matrix and run on 4 nodes (say) using MPI. The only communication that occurs between nodes is the sharing of edge values at the end of each time step. Every node shares the edge array data, A[i][j], with its neighbor.

if (myrank == 0)
{
    for (i = 0 to x)
        for (j = 0 to y)
        {
            /* C++ code implementation */
            ....
            MPI_SEND(A[x][0], A[x][1], A[x][2], ..., Destination = 1, ...)
            MPI_RECEIVE(B[0][0], B[0][1], ..., Sender = 1, ...)
            MPI_BARRIER
        }
}

if (myrank == 1)
{
    for (i = x+1 to xx)
        for (j = 0 to y)
        {
            /* C++ code implementation */
            ....
            MPI_SEND(B[x][0], B[x][1], B[x][2], ..., Destination = 0, ...)
            MPI_RECEIVE(A[0][0], A[0][1], ..., Sender = 0, ...)
            MPI_BARRIER
        }
}

I wanted to know if my approach is correct, and would also appreciate any guidance on other MPI functions to look into for the implementation.

Thanks,
Ashwin

Answer

Just to amplify Joel's points a bit:


This goes much easier if you allocate your arrays so that they're contiguous (something C's "multidimensional arrays" don't give you automatically:)

#include <stdlib.h>

int **alloc_2d_int(int rows, int cols) {
    int *data = (int *)malloc(rows*cols*sizeof(int));
    int **array = (int **)malloc(rows*sizeof(int*));
    for (int i=0; i<rows; i++)
        array[i] = &(data[cols*i]);

    return array;
}

/*...*/
int **A;
/*...*/
A = alloc_2d_int(N,M);


Then, you can do sends and receives of the entire NxM array with

MPI_Send(&(A[0][0]), N*M, MPI_INT, destination, tag, MPI_COMM_WORLD);


and when you're done, free the memory with

free(A[0]);
free(A);


Also, MPI_Recv is a blocking receive, and MPI_Send can be a blocking send. One thing that means, as per Joel's point, is that you definitely don't need Barriers. Further, it means that if you have a send/receive pattern as above, you can get yourself into a deadlock situation -- everyone is sending, no one is receiving. Safer is:

if (myrank == 0) {
   MPI_Send(&(A[0][0]), N*M, MPI_INT, 1, tagA, MPI_COMM_WORLD);
   MPI_Recv(&(B[0][0]), N*M, MPI_INT, 1, tagB, MPI_COMM_WORLD, &status);
} else if (myrank == 1) {
   MPI_Recv(&(A[0][0]), N*M, MPI_INT, 0, tagA, MPI_COMM_WORLD, &status);
   MPI_Send(&(B[0][0]), N*M, MPI_INT, 0, tagB, MPI_COMM_WORLD);
}

Another, more general, approach is to use MPI_Sendrecv:

int *sendptr, *recvptr;
int neigh = MPI_PROC_NULL;
int sendtag, recvtag;

if (myrank == 0) {
   sendptr = &(A[0][0]);
   recvptr = &(B[0][0]);
   neigh = 1;
   sendtag = tagA;    /* rank 0 sends A, receives B */
   recvtag = tagB;
} else {
   sendptr = &(B[0][0]);
   recvptr = &(A[0][0]);
   neigh = 0;
   sendtag = tagB;    /* rank 1 sends B, receives A */
   recvtag = tagA;
}
MPI_Sendrecv(sendptr, N*M, MPI_INT, neigh, sendtag,
             recvptr, N*M, MPI_INT, neigh, recvtag,
             MPI_COMM_WORLD, &status);


or nonblocking sends and/or receives.
