Second MPI_Send is hanging if buffer size is over 256

Problem description

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* inside main(int argc, char **argv): */
int n, j, i, i2, i3, rank, size, rowChunk, **cells, **cellChunk;


MPI_Status status;

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);


if(!rank){
    printf("\nEnter board size:\n");
    fflush(stdout);
    scanf("%d", &n);

    printf("\nEnter the total iterations to play:\n");
    fflush(stdout);
    scanf("%d", &j);


    srand(3);

    rowChunk = n/size; //how many rows each process will get

    for(i=1; i<size; i++){

        MPI_Send(&n,1, MPI_INT, i, 0, MPI_COMM_WORLD);
        MPI_Send(&j,1, MPI_INT, i, 7, MPI_COMM_WORLD);
    }

    cells = (int**) malloc(n*sizeof(int*));    //create main 2D array

    for(i=0; i<n; i++){

        cells[i] = (int*) malloc(n*sizeof(int));
    }

    for(i=0; i<n; i++){
        for(i2=0; i2<n; i2++){           //fill array with random data

            cells[i][i2] = rand() % 2;
        }
    }       

    for(i=1; i<size; i++){        //send blocks of rows to each process
        for(i2=0; i2<rowChunk; i2++){ //this works for all n

            MPI_Send(cells[i2+(rowChunk*i)], n, MPI_INT, i, i2, MPI_COMM_WORLD);
        }
    }

    cellChunk = (int**) malloc(rowChunk*sizeof(int*));

    for(i=0; i<rowChunk; i++){    //declare 2D array for process zero's array chunk

        cellChunk[i] = (int*) malloc(n*sizeof(int));
    }

    for(i=0; i<rowChunk; i++){   //give process zero its proper chunk of the array
        for(i2=0; i2<n; i2++){

            cellChunk[i][i2] = cells[i][i2];
        }
    }


    for(i3=1; i3<=j; i3++){

        MPI_Send(cellChunk[0], n, MPI_INT, size-1, 1, MPI_COMM_WORLD); //Hangs here if n > 256
        MPI_Send(cellChunk[rowChunk-1], n, MPI_INT, 1, 2, MPI_COMM_WORLD); //also hangs if n > 256

            ... //Leaving out code that works

This code works perfectly if n (the array size) is less than or equal to 256. Any greater, and it hangs on the first MPI_Send inside the iteration loop. Also, when the array row chunks are sent out to the other processes (the earlier MPI_Send calls), the other processes receive their data perfectly, even though n > 256. What would cause just this MPI_Send to hang if the buffer size is over 256?
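The threshold behavior is easy to reproduce outside this program. Below is a hypothetical stand-alone sketch (not from the question) in which both of two ranks send before either receives: with a small count the eager sends complete out of the implementation's internal buffers, while above the eager limit both MPI_Send calls block waiting for a matching receive and the run hangs. The exact cutoff depends on the MPI implementation and transport, so COUNT here is only illustrative.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical reproducer: run with exactly two ranks, e.g. mpirun -np 2 ./a.out
   With a small COUNT (e.g. 64) both eager sends complete and the program
   finishes; with a large COUNT (e.g. 1<<20) both ranks block in MPI_Send
   waiting for a receive that is never posted first, and the run hangs. */
#define COUNT (1<<20)

int main(int argc, char **argv){
    int rank;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *buf  = (int*) calloc(COUNT, sizeof(int));
    int  peer = 1 - rank;       /* each rank talks to the other */

    MPI_Send(buf, COUNT, MPI_INT, peer, 0, MPI_COMM_WORLD);  /* blocks here when COUNT is large */
    MPI_Recv(buf, COUNT, MPI_INT, peer, 0, MPI_COMM_WORLD, &status);

    printf("rank %d finished\n", rank);
    free(buf);
    MPI_Finalize();
    return 0;
}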

Answer

You are never receiving any messages, so the code fills the local MPI buffer space and then deadlocks, waiting for an MPI_Recv (or similar) call to be run. Most MPI implementations deliver small messages "eagerly" out of internal buffers, which is why the sends appear to work for small n, but fall back to a rendezvous protocol above a size threshold; past that point MPI_Send blocks until the receiver posts a matching receive, and here the threshold evidently sits at 256 ints (1 KiB with 4-byte ints). You will need to insert receive operations so that your messages are actually sent and processed on the receivers.
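A minimal sketch of what the missing receive side might look like on the worker ranks (rank != 0), assuming they are meant to mirror the root's sends above. The tags 0, 7, i2, 1, and 2 come straight from the question's code; topGhost and bottomGhost are hypothetical buffers for the boundary rows, since that part of the program was left out.

MPI_Recv(&n, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);  //pairs with the tag-0 send of n
MPI_Recv(&j, 1, MPI_INT, 0, 7, MPI_COMM_WORLD, &status);  //pairs with the tag-7 send of j

rowChunk = n/size;

cellChunk = (int**) malloc(rowChunk*sizeof(int*));
for(i=0; i<rowChunk; i++){
    cellChunk[i] = (int*) malloc(n*sizeof(int));
}

for(i2=0; i2<rowChunk; i2++){  //pairs with the row-block sends, tag == i2
    MPI_Recv(cellChunk[i2], n, MPI_INT, 0, i2, MPI_COMM_WORLD, &status);
}

int *topGhost    = (int*) malloc(n*sizeof(int));  //hypothetical boundary-row buffers
int *bottomGhost = (int*) malloc(n*sizeof(int));

for(i3=1; i3<=j; i3++){
    if(rank == size-1)  //pairs with the tag-1 send that hangs
        MPI_Recv(topGhost, n, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
    if(rank == 1)       //pairs with the tag-2 send that hangs
        MPI_Recv(bottomGhost, n, MPI_INT, 0, 2, MPI_COMM_WORLD, &status);

    ... //rest of the iteration's exchange and update
}

Where each rank both sends and receives a boundary row in the same step, MPI_Sendrecv (or MPI_Isend/MPI_Irecv followed by MPI_Waitall) pairs the two operations in one call and avoids this class of deadlock regardless of message size.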
