Limits with MPI_Send or MPI_Recv?


Question

Are there any limits on the message size for MPI_Send or MPI_Recv, or limits imposed by the machine? When I try to send large data, it does not complete. This is my code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <math.h>
#include <string.h>

void AllGather_ring(void* data, int count, MPI_Datatype datatype,MPI_Comm communicator)
{
  int me;
  MPI_Comm_rank(communicator, &me);
  int world_size;
  MPI_Comm_size(communicator, &world_size);
  int next=me+1;
  if(next>=world_size)
      next=0;
  int prev=me-1;
  if(prev<0)
      prev=world_size-1;
  int i,curi=me;
  for(i=0;i<world_size-1;i++)
  {
     /* forward the block we currently hold to the next rank */
     MPI_Send((char*)data+curi*sizeof(int)*count, count, datatype, next, 0, communicator);
     curi=curi-1;
     if(curi<0)
         curi=world_size-1;
     /* then wait for the previous rank's block */
     MPI_Recv((char*)data+curi*sizeof(int)*count, count, datatype, prev, 0, communicator, MPI_STATUS_IGNORE);
  }
}


void test(void* buff,int world_size,int count)
{
    MPI_Barrier(MPI_COMM_WORLD);
    AllGather_ring(buff,count,MPI_INT,MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
}
int main(int argc, char* argv[]) {
    int count = 20000;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc,&argv);
    int world_rank,world_size,namelen;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int* buff=(int*) malloc(world_size*sizeof(int)*count);
    int i;
    /* fill this rank's own block of the buffer with its rank id */
    for (i = 0; i < count; i++) {
        buff[world_rank*count + i] = world_rank;
    }
    test(buff,world_size,count);
    MPI_Finalize();
    return 0;
}

It stops when I try to run with a buffer of about 80000 bytes (40000 integers), i.e. with count = 20000 and 4 processes.

Answer

Your code is incorrect. You post each receive only after the matching send has completed. MPI_Send is only guaranteed to complete after a corresponding MPI_Recv has been posted, so you run into a classic deadlock.

It happens to work for small messages because they are handled differently (using an unexpected-message buffer as a performance optimization). In that case MPI_Send is allowed to complete before the matching MPI_Recv is posted.

Instead, you can:

  • Post nonblocking (immediate) sends and receives (MPI_Isend, MPI_Irecv) to resolve the deadlock.
  • Use MPI_Sendrecv (a sketch follows this list).
  • Use MPI_Allgather.
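
For the MPI_Sendrecv route, here is a minimal sketch rather than the original author's code: it keeps the question's ring layout (each rank's block at offset rank*count integers, and byte offsets that assume the datatype is MPI_INT) and pairs every send with its receive in a single call, so the cycle cannot deadlock. The function name AllGather_ring_sendrecv is invented for illustration.

#include <mpi.h>

/* Sketch only: same ring all-gather as in the question, but each step
 * uses MPI_Sendrecv so the send and the matching receive are paired. */
void AllGather_ring_sendrecv(void* data, int count, MPI_Datatype datatype, MPI_Comm communicator)
{
    int me, world_size;
    MPI_Comm_rank(communicator, &me);
    MPI_Comm_size(communicator, &world_size);

    int next = (me + 1) % world_size;                    /* right neighbour in the ring */
    int prev = (me + world_size - 1) % world_size;       /* left neighbour in the ring */

    int i, sendi = me;                                   /* index of the block we forward */
    for (i = 0; i < world_size - 1; i++) {
        int recvi = (sendi + world_size - 1) % world_size;   /* block arriving from prev */
        /* offsets assume MPI_INT blocks, as in the question's code */
        MPI_Sendrecv((char*)data + (size_t)sendi * count * sizeof(int), count, datatype, next, 0,
                     (char*)data + (size_t)recvi * count * sizeof(int), count, datatype, prev, 0,
                     communicator, MPI_STATUS_IGNORE);
        sendi = recvi;
    }
}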

I recommend the last option, MPI_Allgather.
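
With the collective, the whole hand-written routine collapses to a single call. Here is a sketch under the same assumptions (int data, each rank's own block stored at offset rank*count); MPI_IN_PLACE tells the library to take each rank's contribution from its slot in the output buffer. The wrapper name is invented for illustration; test() would simply call it in place of AllGather_ring.

#include <mpi.h>

/* Sketch only: replace the hand-rolled ring with the built-in collective. */
void AllGather_collective(int* buff, int count, MPI_Comm communicator)
{
    /* MPI_IN_PLACE: each rank's own block is read from buff + rank*count,
     * and the library gathers all blocks into every rank's buff. */
    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  buff, count, MPI_INT, communicator);
}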
