MPI Send and Receive don't work with more than 8182 doubles


Question

I'm having some trouble with the following code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int id, p, n, ln, i, retCode;
        double *buffer;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &p);
        MPI_Comm_rank(MPI_COMM_WORLD, &id);

        n = strtol(argv[1], NULL, 10); // total number of elements to be distributed
        ln = n / p;                    // local number of elements

        buffer = (double*)calloc(ln, sizeof(double));

        if (id == p-1)  // process p-1 sends to the other processes
        {
            for (i = 0; i < p-1; i++)
            {
                fprintf(stdout, "Process %d is sending %d elements to process %d\n", p-1, ln, i);
                retCode = MPI_Ssend(buffer, ln, MPI_DOUBLE, i, 0, MPI_COMM_WORLD);

                if (retCode)
                    fprintf(stdout, "MPI_Ssend error at file %s, line %d, code %d\n", __FILE__, __LINE__, retCode);

                fprintf(stdout, "Process %d completed sending to process %d\n", p-1, i);
            }
        }
        else    // the other processes receive from process p-1
        {
            fprintf(stdout, "Process %d is receiving %d elements from process %d\n", id, ln, p-1);
            retCode = MPI_Recv(buffer, ln, MPI_DOUBLE, p-1, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            if (retCode)
                fprintf(stdout, "MPI_Recv error at file %s, line %d, code %d\n", __FILE__, __LINE__, retCode);
            fprintf(stdout, "Process %d received from process %d\n", id, p-1);
        }

        free(buffer);
        MPI_Finalize();
        return 0;
    }
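For context, a typical build-and-run sequence looks like this (assuming an MPI toolchain with the standard `mpicc`/`mpiexec` wrappers is installed; with two ranks, the argument 16366 gives ln = 8183, which reproduces the failure shown below):

```shell
# Compile with the MPI compiler wrapper (the wrapper name may vary by implementation).
mpicc -o sendreceive sendreceive.c

# Launch two ranks; 16366 / 2 = 8183 doubles per rank.
mpiexec -np 2 ./sendreceive 16366
```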

The idea is to open a dataset with process p-1 and then distribute it to the remaining processes. This solution works as long as the variable ln (local number of elements) is at most 8182. When I increase the number of elements, I get the following error:

    mpiexec -np 2   ./sendreceive 16366
    Process 0 is receiving 8183 elements from process 1
    Process 1 is sending 8183 elements to process 0
    Fatal error in MPI_Recv: Other MPI error, error stack:
    MPI_Recv(224)...................: MPI_Recv(buf=0x2000590, count=8183,         MPI_DOUBLE, src=1, tag=MPI_ANY_TAG, MPI_COMM_WORLD, status=0x1) failed
    PMPIDI_CH3I_Progress(623).......: fail failed
    pkt_RTS_handler(317)............: fail failed
    do_cts(662).....................: fail failed
    MPID_nem_lmt_dcp_start_recv(288): fail failed
    dcp_recv(154)...................: Internal MPI error!  cannot read from remote process

What is going wrong?

Answer

I would guess that the code works if you use MPI_Send instead of MPI_Ssend? Does it work if you try another communication device?

If the answer to at least one of these questions is yes, then I would check whether this is a known bug in the MPI implementation you are using.
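A couple of quick experiments along those lines (the environment variable below is Intel MPI-specific and is an assumption based on the `PMPIDI_CH3I_Progress` frames in the error stack; consult your implementation's documentation for its equivalents):

```shell
# 1. Swap MPI_Ssend for MPI_Send in the source and rerun. Small messages go
#    through the eager protocol, large ones through the rendezvous protocol,
#    so a size-dependent failure often points at the large-message path.

# 2. Try a different communication fabric/device. Intel MPI example
#    (hypothetical for other implementations):
mpiexec -np 2 -genv I_MPI_FABRICS shm ./sendreceive 16366

# 3. "cannot read from remote process" can mean kernel-assisted copying
#    between ranks is blocked; on Linux, Yama ptrace restrictions are one
#    known cause in some setups (0 = unrestricted):
cat /proc/sys/kernel/yama/ptrace_scope
```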
