Creating dynamically sized MPI file views


Problem description

I would like to write out a binary file using collective MPI I/O. My plan is to create an MPI derived type analogous to

struct soln_dynamic_t
{
    int int_data[2];
    double *u;   /* Length constant for all instances of this struct */
};

Each processor then creates a view based on the derived type, and writes into the view.

I have this all working for the case in which *u is replaced with u[10] (see complete code below), but ultimately, I'd like to have a dynamic length array for u. (In case it matters, the length will be fixed for all instances of soln_dynamic_t for any run, but not known at compile time.)

What is the best way to handle this?

I have read several posts on why I can't use soln_dynamic_t directly as an MPI structure. The problem is that processors are not guaranteed to have the same offset between u[0] and int_data[0]. (Is that right?)
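
As a quick illustration (my own sketch, not part of the original question), the displacement of the pointed-to data relative to the struct depends on where each buffer happens to be allocated, so a derived type built from one instance does not describe another:

/* Sketch: with a pointer member, the displacement depends on where malloc
   placed the buffer, so it generally differs between instances (and ranks). */
struct soln_dynamic_t a, b;
a.u = malloc(10 * sizeof(double));
b.u = malloc(10 * sizeof(double));

MPI_Aint base_a, data_a, base_b, data_b;
MPI_Get_address(a.int_data, &base_a);
MPI_Get_address(a.u, &data_a);
MPI_Get_address(b.int_data, &base_b);
MPI_Get_address(b.u, &data_b);

/* disp_a and disp_b are in general different values. */
MPI_Aint disp_a = data_a - base_a;
MPI_Aint disp_b = data_b - base_b;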

On the other hand, the structure

struct soln_static_t
{
    int int_data[2];
    double u[10];     /* fixed at compile time */
};

works because the offsets are guaranteed to be the same across processors.

I have considered several approaches:

  • Create the view based on manually defined offsets, etc, rather than using a derived type.

  • Base the MPI structure on another MPI type, i.e. a contiguous type for *u (is that allowed?); see the sketch after this list.
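
The second idea could be sketched roughly as follows (my own sketch, not part of the original question; it reuses the n, int_data and u arguments of build_soln_type below, and the displacement still has to be measured on a real instance because u lives on the heap):

/* Sketch: nest a contiguous type for the u payload inside the struct type. */
MPI_Datatype u_type, soln_t;
MPI_Type_contiguous(n, MPI_DOUBLE, &u_type);

int block_lengths[2] = {2, 1};
MPI_Datatype typelist[2] = {MPI_INT, u_type};
MPI_Aint disp[2], base, addr;
MPI_Get_address(int_data, &base);
MPI_Get_address(u, &addr);
disp[0] = 0;
disp[1] = addr - base;

MPI_Type_create_struct(2, block_lengths, disp, typelist, &soln_t);
/* The extent would still need MPI_Type_create_resized, as in the full code below. */
MPI_Type_commit(&soln_t);
MPI_Type_free(&u_type);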

I am guessing there must be a standard way to do this. Any suggestions would be very helpful.

Several other posts on this issue have been helpful, although they mostly deal with communication and not file I/O.

Here is the complete code:

#include <mpi.h>

typedef struct 
{
    int int_data[2];
    double u[10];  /* Make this a dynamic length (but fixed) */
} soln_static_t;


void build_soln_type(int n, int* int_data, double *u, MPI_Datatype *soln_t)
{
    int block_lengths[2] = {2,n};
    MPI_Datatype typelist[2] = {MPI_INT, MPI_DOUBLE};

    MPI_Aint disp[2], start_address, address;    
    MPI_Get_address(int_data,&start_address);
    MPI_Get_address(u,&address);
    disp[0] = 0;
    disp[1] = address-start_address;   /* displacement of u relative to int_data */

    MPI_Datatype tmp_type;
    MPI_Type_create_struct(2,block_lengths,disp,typelist,&tmp_type);

    MPI_Aint extent;
    extent = block_lengths[0]*sizeof(int) + block_lengths[1]*sizeof(double);
    MPI_Type_create_resized(tmp_type, 0, extent, soln_t);
    MPI_Type_commit(soln_t);
}

int main(int argc, char** argv)
{
    MPI_File   file;
    int globalsize, localsize, starts, order;

    MPI_Datatype localarray, soln_t;
    int rank, nprocs, nsize = 10;  /* must match size in struct above */

    /* --- Initialize MPI */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* --- Set up data to write out */
    soln_static_t data;
    data.int_data[0] = nsize;
    data.int_data[1] = rank;
    data.u[0] = 3.14159;  /* To check that data is written as expected */
    build_soln_type(nsize, data.int_data, data.u, &soln_t);

    MPI_File_open(MPI_COMM_WORLD, "bin.out", 
                  MPI_MODE_CREATE|MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &file);

    /* --- Create file view for this processor */
    globalsize = nprocs;  
    localsize = 1;
    starts = rank;
    order = MPI_ORDER_C;

    MPI_Type_create_subarray(1, &globalsize, &localsize, &starts, order, 
                             soln_t, &localarray);
    MPI_Type_commit(&localarray);

    MPI_File_set_view(file, 0, soln_t, localarray, 
                           "native", MPI_INFO_NULL);

    /* --- Write data into view */
    MPI_File_write_all(file, data.int_data, 1, soln_t, MPI_STATUS_IGNORE);

    /* --- Clean up */
    MPI_File_close(&file);

    MPI_Type_free(&localarray);
    MPI_Type_free(&soln_t);

    MPI_Finalize();
    return 0;
}

Recommended answer

Since the size of the u array of the soln_dynamic_t type is known at runtime and will not change after that, I'd rather suggest another approach.

Basically, you store all the data contiguously in memory:

typedef struct
{
    int int_data[2];
    double u[];  /* Make this a dynamic length (but fixed) */
} soln_dynamic_t;

Then you have to allocate this struct manually:

soln_dynamic_t * alloc_soln(int nsize, int count) {
    return (soln_dynamic_t *)calloc(count, offsetof(soln_dynamic_t, u) + nsize*sizeof(double));
}

Note that you cannot directly index an array of soln_dynamic_t, because sizeof(soln_dynamic_t) does not include the flexible u array. Instead, you have to calculate the element addresses manually.

soln_dynamic_t *p = alloc_soln(10, 2);
p[0].int_data[0] = 1;  // OK
p[0].u[0] = 2;         // OK
p[1].int_data[0] = 3;  // WRONG: p[1] is offset by sizeof(soln_dynamic_t), which does not include the u array.
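
If several records are packed back to back, the address of element i has to be computed by hand. A hypothetical helper (my own addition, not part of the answer) could look like this:

/* Hypothetical helper: step over records whose true size is
   offsetof(soln_dynamic_t, u) + nsize*sizeof(double). */
soln_dynamic_t *soln_at(soln_dynamic_t *base, int nsize, int i)
{
    size_t stride = offsetof(soln_dynamic_t, u) + nsize * sizeof(double);
    return (soln_dynamic_t *)((char *)base + i * stride);
}

/* Usage: soln_at(p, 10, 1)->int_data[0] = 3; */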

Here is the full rewritten version of your program:

#include <mpi.h>
#include <stdlib.h>   /* calloc, free */
#include <stddef.h>   /* offsetof */

typedef struct 
{
    int int_data[2];
    double u[];  /* Make this a dynamic length (but fixed) */
} soln_dynamic_t;


void build_soln_type(int n, MPI_Datatype *soln_t)
{
    int block_lengths[2] = {2,n};
    MPI_Datatype typelist[2] = {MPI_INT, MPI_DOUBLE};
    MPI_Aint disp[2];

    disp[0] = offsetof(soln_dynamic_t, int_data);
    disp[1] = offsetof(soln_dynamic_t, u);

    MPI_Datatype tmp_type;
    MPI_Type_create_struct(2,block_lengths,disp,typelist,&tmp_type);

    MPI_Aint extent;
    extent = offsetof(soln_dynamic_t, u) + block_lengths[1]*sizeof(double);
    MPI_Type_create_resized(tmp_type, 0, extent, soln_t);
    MPI_Type_free(&tmp_type);
    MPI_Type_commit(soln_t);
}

soln_dynamic_t * alloc_soln(int nsize, int count) {
    return (soln_dynamic_t *)calloc(count, offsetof(soln_dynamic_t, u) + nsize*sizeof(double));
}

int main(int argc, char** argv)
{
    MPI_File   file;
    int globalsize, localsize, starts, order;

    MPI_Datatype localarray, soln_t;
    int rank, nprocs, nsize = 10;  /* runtime length of u */

    /* --- Initialize MPI */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* --- Set up data to write out */
    soln_dynamic_t *data = alloc_soln(nsize,1);
    data->int_data[0] = nsize;
    data->int_data[1] = rank;
    data->u[0] = 3.14159;  /* To check that data is written as expected */
    build_soln_type(nsize, &soln_t);

    MPI_File_open(MPI_COMM_WORLD, "bin2.out", 
                  MPI_MODE_CREATE|MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &file);

    /* --- Create file view for this processor */
    globalsize = nprocs;  
    localsize = 1;
    starts = rank;
    order = MPI_ORDER_C;

    MPI_Type_create_subarray(1, &globalsize, &localsize, &starts, order, 
                             soln_t, &localarray);
    MPI_Type_commit(&localarray);

    MPI_File_set_view(file, 0, soln_t, localarray, 
                           "native", MPI_INFO_NULL);

    /* --- Write data into view */
    MPI_File_write_all(file, data, 1, soln_t, MPI_STATUS_IGNORE);

    /* --- Clean up */
    MPI_File_close(&file);

    MPI_Type_free(&localarray);
    MPI_Type_free(&soln_t);
    free(data);

    MPI_Finalize();
    return 0;
}
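
To check what was written, the same view can be reused for a collective read. A minimal sketch (my own addition, not part of the answer; it would have to run before the MPI_Type_free calls above):

/* Sketch: read this rank's record back and compare it with what was written. */
MPI_File fh;
soln_dynamic_t *check = alloc_soln(nsize, 1);

MPI_File_open(MPI_COMM_WORLD, "bin2.out", MPI_MODE_RDONLY,
              MPI_INFO_NULL, &fh);
MPI_File_set_view(fh, 0, soln_t, localarray, "native", MPI_INFO_NULL);
MPI_File_read_all(fh, check, 1, soln_t, MPI_STATUS_IGNORE);
MPI_File_close(&fh);

/* Expect check->int_data[1] == rank and check->u[0] == 3.14159. */
free(check);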
