Is MPI_Allreduce on a structure with fields of the same type portable?


Question

Consider something like this:

typedef struct TS { 
    double a,b,c; 
} S; 

... 
S x,y; 
... 
MPI_Allreduce(&x, &y, 3, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD); 


Is the above code completely portable (without using MPI_Type_struct and so on; all variables in the structure are assumed to be of the same type)? Does this also hold when different hardware is used on the various nodes?

Thanks in advance, Jack

Answer

Hristo Iliev is completely right; the C standard allows arbitrary padding between the fields. So there's no guarantee that this struct has the same memory layout as an array of three doubles, and your reduce could give you garbage.

So there are two different approaches you could take here. One is to ignore the problem, as most C compilers will probably treat this as an array of three contiguous doubles. I normally wouldn't even mention this as an option, except that in this case it's so easy to test the assumption; in your code you can have

#include <stddef.h>   /* for offsetof */
#include <assert.h>
assert ( offsetof(S,b) == sizeof(double) );
assert ( offsetof(S,c) == 2*sizeof(double) );

and if your code gets past the asserts, you're good. (Note that this still doesn't guarantee that an array of two of these structs is equivalent to an array of 6 contiguous doubles...)
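One more check along the same lines (not part of the original answer, but straightforward) addresses that caveat: if sizeof(S) is exactly three doubles, there is no trailing padding either, so an array of these structs really is a contiguous array of doubles.

assert ( sizeof(S) == 3*sizeof(double) );   /* together with the offsetof checks above, rules out trailing padding */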

The second approach is to create the struct datatype and the reduction operation yourself, to be safe. Really, it's not too difficult, and then you know it will work, so that's the way to go; you can then use that type safely for any other operations:

#include <stdio.h>
#include <stddef.h>
#include <mpi.h>

typedef struct TS {
    double a,b,c;
} S;

/* our reduction operation */
void sum_struct_ts(void *in, void *inout, int *len, MPI_Datatype *type){
    /* ignore type, just trust that it's our struct type */

    S *invals    = in;
    S *inoutvals = inout;

    for (int i=0; i<*len; i++) {
        inoutvals[i].a  += invals[i].a;
        inoutvals[i].b  += invals[i].b;
        inoutvals[i].c  += invals[i].c;
    }

    return;
}

void defineStruct(MPI_Datatype *tstype) {
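    /* describe S to MPI using the actual field offsets, so any compiler-inserted padding is accounted for */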
    const int count = 3;
    int          blocklens[count];
    MPI_Datatype types[count];
    MPI_Aint     disps[count];

    for (int i=0; i < count; i++) {
        types[i] = MPI_DOUBLE;
        blocklens[i] = 1;
    }

    disps[0] = offsetof(S,a);
    disps[1] = offsetof(S,b);
    disps[2] = offsetof(S,c);

    MPI_Type_create_struct(count, blocklens, disps, types, tstype);
    MPI_Type_commit(tstype);
}

int main (int argc, char **argv) {

    int rank, size;
    MPI_Datatype structtype;
    MPI_Op       sumstruct;
    S   local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    defineStruct(&structtype);
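    /* register the user-defined reduction; the second argument (1) marks it as commutative */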
    MPI_Op_create(sum_struct_ts, 1, &sumstruct);

    local.a = rank;
    local.b = 2*rank;
    local.c = 3*rank;

    MPI_Reduce(&local, &global, 1, structtype, sumstruct, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("global.a = %lf; expected %lf\n", global.a, 1.*size*(size-1)/2);
        printf("global.b = %lf; expected %lf\n", global.b, 2.*size*(size-1)/2);
        printf("global.c = %lf; expected %lf\n", global.c, 3.*size*(size-1)/2);
    }

    MPI_Finalize();
    return 0;
}

Compiling and running gives:

$ mpicc -o foo foo.c -std=c99

$ mpirun -np 1 ./foo
global.a = 0.000000; expected 0.000000
global.b = 0.000000; expected 0.000000
global.c = 0.000000; expected 0.000000

$ mpirun -np 2 ./foo
global.a = 1.000000; expected 1.000000
global.b = 2.000000; expected 2.000000
global.c = 3.000000; expected 3.000000

$ mpirun -np 3 ./foo
global.a = 3.000000; expected 3.000000
global.b = 6.000000; expected 6.000000
global.c = 9.000000; expected 9.000000

$ mpirun -np 12 ./foo
global.a = 66.000000; expected 66.000000
global.b = 132.000000; expected 132.000000
global.c = 198.000000; expected 198.000000
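The example above uses MPI_Reduce so that only rank 0 checks the result, but since the question was about MPI_Allreduce: the same user-defined datatype and operation drop straight in. A minimal sketch, reusing structtype and sumstruct from the program above:

MPI_Allreduce(&local, &global, 1, structtype, sumstruct, MPI_COMM_WORLD);   /* every rank gets the summed struct */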
