Sending and receiving 2D array over MPI


Question

The issue I am trying to resolve is the following:

The C++ serial code I have computes across a large 2D matrix. To optimize this process, I wish to split this large 2D matrix and run on 4 nodes (say) using MPI. The only communication that occurs between nodes is the sharing of edge values at the end of each time step. Every node shares the edge array data, A[i][j], with its neighbor.

Based on reading about MPI, I have the following scheme to be implemented.

if (myrank == 0)
{
    for (i = 0 to x)
        for (y = 0 to y)
        {
            C++ CODE IMPLEMENTATION
            ....
            MPI_SEND(A[x][0], A[x][1], A[x][2], Destination = 1.....)
            MPI_RECEIVE(B[0][0], B[0][1]......Sender = 1.....)
            MPI_BARRIER
        }
}

if (myrank == 1)
{
    for (i = x+1 to xx)
        for (y = 0 to y)
        {
            C++ CODE IMPLEMENTATION
            ....
            MPI_SEND(B[x][0], B[x][1], B[x][2], Destination = 0.....)
            MPI_RECEIVE(A[0][0], A[0][1]......Sender = 1.....)
            MPI_BARRIER
        }
}

I wanted to know if my approach is correct, and would also appreciate any guidance on other MPI functions to look into for implementation.

Thanks, Ashwin.

Answer

Just to amplify Joel's points a bit:

This goes much easier if you allocate your arrays so that they're contiguous (something C's "multidimensional arrays" don't give you automatically):

/* Allocate a rows x cols array as one contiguous block of ints,
 * plus a row-pointer index so A[i][j] syntax still works. */
int **alloc_2d_int(int rows, int cols) {
    int *data = (int *)malloc(rows*cols*sizeof(int));
    int **array = (int **)malloc(rows*sizeof(int*));
    for (int i=0; i<rows; i++)
        array[i] = &(data[cols*i]);

    return array;
}

/*...*/
int **A;
/*...*/
A = alloc_2d_int(N,M);

Then, you can send the whole array with, e.g.,

MPI_Send(&(A[0][0]), N*M, MPI_INT, destination, tag, MPI_COMM_WORLD);

and when you're done, free the memory with

free(A[0]);
free(A);

Also, MPI_Recv is a blocking receive, and MPI_Send can be a blocking send. One thing that means, as per Joel's point, is that you definitely don't need barriers. Further, it means that if you have a send/receive pattern as above, you can get yourself into a deadlock situation -- everyone is sending, no one is receiving. Safer is:

if (myrank == 0) {
   MPI_Send(&(A[0][0]), N*M, MPI_INT, 1, tagA, MPI_COMM_WORLD);
   MPI_Recv(&(B[0][0]), N*M, MPI_INT, 1, tagB, MPI_COMM_WORLD, &status);
} else if (myrank == 1) {
   MPI_Recv(&(A[0][0]), N*M, MPI_INT, 0, tagA, MPI_COMM_WORLD, &status);
   MPI_Send(&(B[0][0]), N*M, MPI_INT, 0, tagB, MPI_COMM_WORLD);
}

Another, more general, approach is to use MPI_Sendrecv:

int *sendptr, *recvptr;
int neigh = MPI_PROC_NULL;

if (myrank == 0) {
   sendptr = &(A[0][0]);
   recvptr = &(B[0][0]);
   neigh = 1;
} else {
   sendptr = &(B[0][0]);
   recvptr = &(A[0][0]);
   neigh = 0;
}
MPI_Sendrecv(sendptr, N*M, MPI_INT, neigh, tagA, recvptr, N*M, MPI_INT, neigh, tagB, MPI_COMM_WORLD, &status);

or nonblocking sends and/or receives.
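The nonblocking variant posts both operations up front and then waits for both to complete, which also sidesteps the deadlock. A minimal sketch of the same two-rank exchange (same `N`, `M`, `A`, `B` as above; the function name and tag values are illustrative, and this needs an MPI environment, i.e. `mpicc`/`mpirun`, to build and run):

```c
#include <mpi.h>

/* Two-rank exchange with nonblocking calls: post the receive
 * and the send, then wait for both requests to complete. */
void exchange(int myrank, int **A, int **B, int N, int M) {
    MPI_Request reqs[2];

    if (myrank == 0) {
        MPI_Irecv(&(B[0][0]), N*M, MPI_INT, 1, 1, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&(A[0][0]), N*M, MPI_INT, 1, 0, MPI_COMM_WORLD, &reqs[1]);
    } else if (myrank == 1) {
        MPI_Irecv(&(A[0][0]), N*M, MPI_INT, 0, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&(B[0][0]), N*M, MPI_INT, 0, 1, MPI_COMM_WORLD, &reqs[1]);
    }
    /* The buffers must not be touched until both requests finish. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```

This is also the natural shape for the original four-node problem: each rank posts an `MPI_Irecv`/`MPI_Isend` pair per neighbor, then one `MPI_Waitall` over all requests.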
