Sending large std::vector using MPI_Send and MPI_Recv doesn't complete

Question

I'm trying to send a std::vector using MPI. This works fine when the vector is small, but just doesn't work when the vector is large (more than ~15k doubles in the vector). When trying to send a vector with 20k doubles, the program just sits there with the CPU at 100%.

Here is a minimal example:

#include <vector>
#include <mpi.h>

using namespace std;

vector<double> send_and_receive(vector<double> &local_data, int n, int numprocs, int my_rank) {
    MPI_Send(&local_data[0], n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);

    if (my_rank == 0) {
        vector<double> global_data(numprocs*n);
        vector<double> temp(n);
        for (int rank = 0; rank < numprocs; rank++) {
            MPI_Recv(&temp[0], n, MPI_DOUBLE, rank, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            for (int i = 0; i < n; i++) {
                global_data[rank*n + i] = temp[i];
            }
        }
        return global_data;
    }
    return vector<double>();
}

int main(int args, char *argv[]) {
    int my_rank, numprocs;
    // MPI initialization
    MPI_Init (&args, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size (MPI_COMM_WORLD, &numprocs);

    int n = 15000;
    vector<double> local_data(n);

    for (int i = 0; i < n; i++) {
        local_data[i] = n*my_rank + i;
    }

    vector<double> global_data = send_and_receive(local_data, n, numprocs, my_rank);

    MPI_Finalize();

    return 0;
}

I compile it with

mpic++ main.cpp

and run it with

mpirun -n 2 a.out

When I run with n = 15000 the program completes successfully, but with n = 17000 or n = 20000 it never finishes, and the two CPUs sit at 100% until I force close the program.

Does anyone know what the problem could be?

Answer

MPI_Send is a funny call. If there is enough internal buffer space to store the input, it may return immediately; the only guarantee it makes is that the input buffer will not be needed by MPI any further. However, if there isn't enough internal buffer space, the call will block until the opposite MPI_Recv call begins to receive data. See where this is going? Both processes post an MPI_Send that blocks due to insufficient buffer space. When debugging issues like this, it helps to replace MPI_Send with MPI_Ssend.
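
For example, a quick way to confirm this diagnosis in the code above is to swap the send for a synchronous send (a debugging sketch only, not a fix):

// MPI_Ssend never completes until the matching receive has been posted,
// so even the n = 15000 case that "worked" thanks to internal buffering
// should now hang as well.
MPI_Ssend(&local_data[0], n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);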

Possible solutions are:

  • Use buffered send, MPI_Bsend.
  • Use MPI_Sendrecv.
  • Alternate send/recv pairs so that each send has a matching recv (e.g. odd procs send, even procs recv, then vice versa).
  • Use non-blocking send, MPI_Isend (see the sketch below).

See http://www.netlib.org/utk/papers/mpi-book/node39.html
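
To illustrate the non-blocking option, here is a minimal sketch of how the send_and_receive() from the question could be rewritten with MPI_Isend (one possible fix, not the only one): every rank posts its send without blocking, so rank 0 is free to go on and post the receives, including the one that matches its own send.

vector<double> send_and_receive(vector<double> &local_data, int n, int numprocs, int my_rank) {
    // Post the send without blocking; this rank can continue even though
    // no matching receive has been posted yet.
    MPI_Request send_req;
    MPI_Isend(&local_data[0], n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &send_req);

    vector<double> global_data;
    if (my_rank == 0) {
        global_data.resize(numprocs*n);
        for (int rank = 0; rank < numprocs; rank++) {
            // Receive each rank's block directly into its slot of global_data.
            MPI_Recv(&global_data[rank*n], n, MPI_DOUBLE, rank, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

    // local_data must stay untouched until the non-blocking send completes.
    MPI_Wait(&send_req, MPI_STATUS_IGNORE);
    return global_data;
}

With this version rank 0's own send is matched by the rank == 0 iteration of the receive loop, so completion no longer depends on MPI having enough internal buffer space.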
