MPI_Type_create_subarray and MPI_Send
Question
This is my first question here on Stack Overflow. I have two processes, a root (rank 0) and a slave (rank 1). The slave allocates a 2D array of size (CHUNK_ROWS+2, CHUNK_COLUMNS+2) and wants to send its [CHUNK_ROWS][CHUNK_COLUMNS] subarray. The root allocates a 2D array of size (ROWS, COLUMNS) and receives the subarray, which it wants to store starting at ddd[0][0] and then print. The result I get is wrong. Why? I know this code makes little sense on its own, but it is a simple program that should help me with a more complex project. Here is the code:
#include <mpi.h>
#include <iostream>
using namespace std;

#define ROWS 10
#define COLUMNS 10
#define CHUNK_ROWS 5
#define CHUNK_COLUMNS 5
#define TAG 0

int** alloca_matrice(int righe, int colonne)
{
    int** matrice = NULL;
    int i;
    // The matrix must be allocated this way so that the rows are contiguous
    // in memory, which lets me use the 'column' type I define with MPI_Type_vector()
    matrice = (int **)malloc(righe * sizeof(int*));
    if(matrice != NULL){
        matrice[0] = (int *)malloc(righe*colonne*sizeof(int));
        if(matrice[0] != NULL)
            for(i=1; i<righe; i++)
                matrice[i] = matrice[0] + i*colonne;
        else{
            free(matrice);
            matrice = NULL;
        }
    }
    else{
        matrice = NULL;
    }
    return matrice;
}

int main(int argc, char* argv[])
{
    int my_id, numprocs, length, i, j;
    int ndims, sizes[2], subsizes[2], starts[2];
    int** DEBUG_CH;
    int** ddd;
    char name[BUFSIZ];
    MPI_Datatype subarray;
    MPI_Status status;
    MPI_Init(&argc, &argv);                    // Mandatory initialization call
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);     // Get this process's rank
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);  // Get how many processes are active
    MPI_Get_processor_name(name, &length);     // Name of the node the process is running on
    if(my_id==1){
        // create a submatrix type stripped of the ghost cells
        ndims = 2;
        sizes[0] = CHUNK_ROWS+2;
        sizes[1] = CHUNK_COLUMNS+2;
        subsizes[0] = CHUNK_ROWS;
        subsizes[1] = CHUNK_COLUMNS;
        starts[0] = 1;
        starts[1] = 1;
        MPI_Type_create_subarray(ndims, sizes, subsizes, starts, MPI_ORDER_C, MPI_INT, &subarray);
        MPI_Type_commit(&subarray);
        DEBUG_CH = alloca_matrice(CHUNK_ROWS+2, CHUNK_COLUMNS+2);
        for(i=0; i<CHUNK_ROWS+2; i++){
            for(j=0; j<CHUNK_COLUMNS+2; j++){
                if(i==0 || i==CHUNK_ROWS+1 || j==0 || j==CHUNK_COLUMNS+1)
                    DEBUG_CH[i][j] = 5;
                else
                    DEBUG_CH[i][j] = 1;
            }
        }
        MPI_Send(DEBUG_CH, 1, subarray, 0, TAG, MPI_COMM_WORLD);
        MPI_Type_free(&subarray);
    }
    if(my_id==0){
        // create a submatrix type stripped of the ghost cells
        ndims = 2;
        sizes[0] = ROWS;
        sizes[1] = COLUMNS;
        subsizes[0] = CHUNK_ROWS;
        subsizes[1] = CHUNK_COLUMNS;
        starts[0] = 0;
        starts[1] = 0;
        MPI_Type_create_subarray(ndims, sizes, subsizes, starts, MPI_ORDER_C, MPI_INT, &subarray);
        MPI_Type_commit(&subarray);
        ddd = alloca_matrice(ROWS, COLUMNS);
        MPI_Recv(ddd[0], 1, subarray, 1, TAG, MPI_COMM_WORLD, &status);
        MPI_Type_free(&subarray);
        for(i=0; i<CHUNK_ROWS; i++){
            for(j=0; j<CHUNK_COLUMNS; j++){
                printf("%d ", ddd[i][j]);
            }
            printf("\n");
        }
    }
    MPI_Finalize();  // Shut down MPI.
    return 0;
}
Thanks in advance.
Answer
Congratulazioni, your code is almost perfect; there's just one silly mistake in the MPI_Send that you got right in the MPI_Recv.
For the Send, you have
MPI_Send(DEBUG_CH,1,subarray,0,TAG,MPI_COMM_WORLD);
whereas for the Recv, you have
MPI_Recv(ddd[0],1,subarray,1,TAG,MPI_COMM_WORLD,&status);
The second is right. You need to send a pointer to where the data actually starts; that's not DEBUG_CH, which is a pointer to a pointer to ints, but &(DEBUG_CH[0][0]) or, equivalently, DEBUG_CH[0], just as you did with ddd.