Jacobi Relaxation in MPI

Question

I asked a question an hour ago, but it wasn't well asked, so I am re-asking it here.

I have the following Jacobi relaxation code in C:

while ( error > tol && iter < iter_max ) {
    error = 0.0;

    /* 4-point stencil update of every interior point; track the
       largest change as the iteration error */
    for( int j = 1; j < n-1; j++)
    {   
        for( int i = 1; i < m-1; i++ )
        {   
            Anew[j][i] = 0.25 * ( A[j][i+1] + A[j][i-1]
                                + A[j-1][i] + A[j+1][i]);
            error = fmax( error, fabs(Anew[j][i] - A[j][i]));
        }
    }

    /* copy Anew back into A for the next iteration */
    for( int j = 1; j < n-1; j++)
    {   
        for( int i = 1; i < m-1; i++ )
        {   
            A[j][i] = Anew[j][i];
        }
    }

    if(iter % 100 == 0) printf("%5d, %0.6f\n", iter, error);

    iter++;
}

I am using:

  • a 4096x4096 array
  • iter_max = 1000
  • tol = 1.0e-6
  • 16 cores
I have parallelized this code with OpenACC. Now I want to use MPI to try to understand how it works. However, the first implementations I made did not give good results (the new array is not constructed correctly). How can I parallelize this code section with MPI?

Answer

Here is some code I wrote for a similar case; you can use it as a guide. It posts non-blocking receives and sends for the four halo strips, computes the interior points that do not need halo data, and then updates each boundary strip as its halo arrives. (The extra edge term in the stencil comes from the problem this code was written for; for the Jacobi loop in the question it would simply be dropped.)

do {
  iter++;

  /* Post non-blocking receives for the four halo regions:
     left/right halo columns use the derived datatype myHelloVector,
     up/down halo rows are contiguous runs of MPI_FLOAT. */
  MPI_Irecv(&old[1][0], 1, myHelloVector, nbrs[LEFT], 2, MPI_COMM_WORLD, \
            &requestFourR[LEFT]);
  MPI_Irecv(&old[1][chunkSize[1]+1], 1, myHelloVector, nbrs[RIGHT], 1, \
            MPI_COMM_WORLD, &requestFourR[RIGHT]); 

  MPI_Irecv(&old[0][1], chunkSize[1], MPI_FLOAT, nbrs[UP], 4, \
            MPI_COMM_WORLD, &requestFourR[UP]);
  MPI_Irecv(&old[chunkSize[0]+1][1], chunkSize[1], MPI_FLOAT, \
            nbrs[DOWN], 3, MPI_COMM_WORLD, &requestFourR[DOWN]);

  /* Post non-blocking synchronous sends of the matching boundary
     columns/rows to the neighbouring ranks. */
  MPI_Issend(&old[1][1], 1, myHelloVector, nbrs[LEFT], 1, \
            MPI_COMM_WORLD, &requestFourS[LEFT]);
  MPI_Issend(&old[1][chunkSize[1]], 1, myHelloVector, nbrs[RIGHT], 2, \
            MPI_COMM_WORLD, &requestFourS[RIGHT]);

  MPI_Issend(&old[1][1], chunkSize[1], MPI_FLOAT, nbrs[UP], 3, \
            MPI_COMM_WORLD, &requestFourS[UP]);
  MPI_Issend(&old[chunkSize[0]][1], chunkSize[1], MPI_FLOAT, nbrs[DOWN], 4, \
            MPI_COMM_WORLD, &requestFourS[DOWN]);

  /* Compute the interior points that do not need halo data while the
     communication is still in flight. */
  calImage(old, new, edge, chunkSize[ROWS], chunkSize[COLS]);

  /* As each halo receive completes, update the boundary row/column
     that depends on it. */
  for (itr = 0; itr < 4; itr++) {
    MPI_Waitany(4, &requestFourR[0], &index, &status);
    switch ( index ) {  /* could also switch on status.MPI_TAG */
      case 0: /* RIGHT */
              j = 1;
              for (i = 2; i < chunkSize[0]; i++) {
                new[i][j] = 0.25f*(old[i-1][j]+old[i+1][j]+old[i][j-1]+ \
                          old[i][j+1] - edge[i][j]);
              }
              break;
      case 1: /* LEFT */
              j = chunkSize[1];
              for (i = 2; i < chunkSize[0]; i++) {
                new[i][j] = 0.25f*(old[i-1][j]+old[i+1][j]+old[i][j-1]+ \
                          old[i][j+1] - edge[i][j]);
              }
              break;
      case 2: /* DOWN */
              i = 1;
              for (j = 2; j < chunkSize[1]; j++) {
                new[i][j] = 0.25f*(old[i-1][j]+old[i+1][j]+old[i][j-1]+ \
                          old[i][j+1] - edge[i][j]);
              }
              break;
      case 3: /* UP */
              i = chunkSize[0];
              for (j = 2; j < chunkSize[1]; j++) {
                new[i][j] = 0.25f*(old[i-1][j]+old[i+1][j]+old[i][j-1]+ \
                          old[i][j+1] - edge[i][j]);
              }
              break;
    }
  }

  /* The four corner points each need two halo values, so they are
     updated only after all four receives have completed. */
  i = 1; j = 1;
  new[i][j] = 0.25f*(old[i-1][j]+old[i+1][j]+old[i][j-1]+old[i][j+1] - \
              edge[i][j]);

  i = 1; j = chunkSize[1];
  new[i][j] = 0.25f*(old[i-1][j]+old[i+1][j]+old[i][j-1]+old[i][j+1] - \
              edge[i][j]);

  i = chunkSize[0]; j = 1;
  new[i][j] = 0.25f*(old[i-1][j]+old[i+1][j]+old[i][j-1]+old[i][j+1] - \
              edge[i][j]);

  i = chunkSize[0]; j = chunkSize[1];
  new[i][j] = 0.25f*(old[i-1][j]+old[i+1][j]+old[i][j-1]+old[i][j+1] - \
              edge[i][j]);

  /* Make sure the non-blocking sends have completed before their
     buffers are reused in the next iteration. */
  MPI_Waitall(4, requestFourS, statusS);

  /* Swap the old and new arrays for the next iteration. */
  temp = old;
  old = new;
  new = temp;
} while(your_stopping_condition);
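
The code above assumes a few pieces that are not shown: nbrs[] holds the ranks of the four neighbouring processes, chunkSize[] holds the local tile dimensions, and myHelloVector is a derived datatype describing one halo column. Below is a minimal sketch of how those could be set up with a 2D Cartesian communicator; the variable names, the UP/DOWN/LEFT/RIGHT ordering, and the assumption that each rank stores a contiguous (chunkSize[0]+2) x (chunkSize[1]+2) tile of floats in row-major order are mine, not part of the original answer.

/* Sketch only: one possible setup for the helpers used above. */
enum { UP, DOWN, LEFT, RIGHT };            /* assumed ordering */

int rank, nprocs, coords[2];
int dims[2] = {0, 0}, periods[2] = {0, 0};
int nbrs[4];
MPI_Comm cart;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

MPI_Dims_create(nprocs, 2, dims);          /* e.g. a 4x4 grid for 16 ranks */
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
MPI_Cart_coords(cart, rank, 2, coords);
MPI_Cart_shift(cart, 0, 1, &nbrs[UP],   &nbrs[DOWN]);   /* row direction */
MPI_Cart_shift(cart, 1, 1, &nbrs[LEFT], &nbrs[RIGHT]);  /* column direction */

/* One halo column: chunkSize[0] floats, one per local row, with a stride
   of one allocated row (chunkSize[1] + 2 cells including the halo). */
MPI_Datatype myHelloVector;
MPI_Type_vector(chunkSize[0], 1, chunkSize[1] + 2, MPI_FLOAT, &myHelloVector);
MPI_Type_commit(&myHelloVector);

Because reorder is 0 in MPI_Cart_create, the ranks in cart match those in MPI_COMM_WORLD, which is why the loop above can keep communicating on MPI_COMM_WORLD. If the tiles are instead allocated as separate rows (float ** with one malloc per row), the vector datatype would not describe the memory correctly and the halo columns would have to be packed into a contiguous buffer before sending.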

The calImage() function does the calculations that do not depend on the halo-swap operation:

void calImage(float **image, float **newImage, float **edge, \
          int rows, int cols) {
  int i, j;

  for (i = 2; i < rows; i++) {
    for (j = 2; j < cols; j++) {
      newImage[i][j] = 0.25f * (image[i-1][j] \
                        + image[i+1][j] \
                        + image[i][j-1] \
                        + image[i][j+1] \
                        - edge[i][j]);
    }
  }
}
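
One thing the answer leaves as a placeholder is your_stopping_condition. In the question's loop the test is error > tol, but with MPI each rank only sees the maximum change on its own tile, so the ranks have to agree on a global maximum before deciding together whether to stop. A minimal sketch is below; the names localError and globalError are mine, it assumes the check runs at the end of each iteration just before the old/new pointer swap, and it needs <math.h> for fmaxf and fabsf.

/* Sketch only: global convergence test for the Jacobi iteration,
   evaluated once per iteration before the old/new pointer swap. */
float localError = 0.0f, globalError;

for (i = 1; i <= chunkSize[0]; i++)
  for (j = 1; j <= chunkSize[1]; j++)
    localError = fmaxf(localError, fabsf(new[i][j] - old[i][j]));

/* every rank receives the maximum change over the whole domain */
MPI_Allreduce(&localError, &globalError, 1, MPI_FLOAT, MPI_MAX, MPI_COMM_WORLD);

/* keep iterating while globalError > tol && iter < iter_max */

Calling MPI_Allreduce every iteration adds a synchronization point; if that becomes a bottleneck, the test can be performed every hundred iterations or so, at the cost of a few extra sweeps after convergence.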
