MPI Broadcasting dynamic 2D array to other processors
Question
I have searched through many explanations, but I could not figure out how to handle this situation. I want to do something like this: using the master processor, I create a dynamic 2D array. Then I want to:
1- send this array to the other processors, and have each processor print the 2D array;
2- send a part of this array to the others, and have each processor print its part to the screen.
For example: I have an 11*11 2D array and 4 processors. Rank 0 is the master; the others are slaves. For the first situation, I want to send the whole array to rank 1, rank 2, and rank 3. For the second situation, I want to split the rows among the slaves: 11/3 = 3, so rank 1 takes 3 rows, rank 2 takes 3 rows, and rank 3 takes 5 rows.
Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define MASTER (processorID == 0)
#define SLAVE  (processorID != 0)

int processorID;
int numberOfProcessors;

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numberOfProcessors);
    MPI_Comm_rank(MPI_COMM_WORLD, &processorID);
    int i, j;
    double **array;
    if (MASTER) {
        array = (double**) malloc(11 * sizeof(double *));
        for (i = 0; i < 11; i++) {
            array[i] = (double *) malloc(11 * sizeof(double));
        }
        for (i = 0; i < 11; i++) {
            for (j = 0; j < 11; j++) {
                array[i][j] = i * j;
            }
        }
    }
    MPI_Bcast(array, 11*11, MPI_DOUBLE, 0, MPI_COMM_WORLD); // does not work: rows are not contiguous, and slaves pass an uninitialized pointer
    if (SLAVE) {
        for (i = 0; i < 11; i++) {
            for (j = 0; j < 11; j++) {
                printf("%f ", array[i][j]);
            }
        }
    }
    MPI_Finalize();
    return 0;
}
According to this link, MPI_Bcast a dynamic 2d array, I need to create my array as:
if (MASTER) {
    array = (double**) malloc(121 * sizeof(double));
    for (i = 0; i < 11; i++) {
        for (j = 0; j < 11; j++) {
            array[i][j] = i * j; // this is not working
        }
    }
}
But if I do this, I cannot initialize each member of the array; the inner for loops do not work. I could not find any way to solve it.
And for my second question, I followed this link: sending blocks of 2D array in C using MPI. I think I need to change the inside of if(SLAVE): I should create a 2D subArray for each slave processor, and I need to use MPI_Scatterv. But I could not completely understand it.
int main() {
    ...
    ...
    MPI_Scatterv() // what must be here?
    if (SLAVE) {
        if (processorID == numberOfProcessors - 1) {
            subArray = (double**) malloc(5 * sizeof(double *)); // because the last processor takes 5 rows
            for (i = 0; i < 5; i++) {
                subArray[i] = (double *) malloc(11 * sizeof(double));
            }
        }
        else {
            subArray = (double**) malloc(3 * sizeof(double *));
            for (i = 0; i < 3; i++) {
                subArray[i] = (double *) malloc(11 * sizeof(double));
            }
        }
    }
}
C does not really have multidimensional arrays. I would recommend storing your values in a regular 1D buffer, and then computing the correct 1D index from the 2D coordinates. Like so:
double* data = (double*)malloc(sizeof(double)*11*11);
// Now, to access data[i][j], simply do:
data[j + i*11] = ...; // this "maps" 2D indices into 1D
This will spare you all the trouble with the hierarchical malloc-ing, and it can easily be passed to the MPI APIs.