What is the difference between darray and subarray in MPI?

Problem description

I have a parallel I/O project for a parallel programming class, and I have to implement derived datatypes. I don't clearly understand the difference between darray and subarray. Can a darray be derived from dynamically allocated arrays or not? And what is the main difference?

Recommended answer

Subarray lets you describe a single block/slice of a larger multidimensional array. If every MPI task has a single slice/block of a large global array (or if you are communicating chunks of local arrays between tasks), then MPI_Type_create_subarray is the way to go; the syntax is very straightforward. For solving things like PDEs on regular meshes, this distribution is very common - each processor has its own chunk of the global grid, with as many of its grid cells local as possible. In the case of MPI-IO, each MPI task would create a subarray corresponding to its piece of the global array, and use that as its view to read in / write out its part of the domain from / to the file containing all of the data.
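As a minimal sketch of that MPI-IO pattern - assuming a global 8x8 array of doubles split evenly over a 2x2 process grid, with a placeholder file name "global_array.dat":

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);   /* run with 4 ranks (2x2 grid) */

    int pgrid     = 2;                                    /* 2x2 process grid */
    int gsizes[2] = {8, 8};                               /* global array     */
    int lsizes[2] = {gsizes[0] / pgrid, gsizes[1] / pgrid};  /* 4x4 block     */
    int starts[2] = {(rank / pgrid) * lsizes[0],          /* where this       */
                     (rank % pgrid) * lsizes[1]};         /* block begins     */

    /* Datatype describing this rank's block inside the global array. */
    MPI_Datatype filetype;
    MPI_Type_create_subarray(2, gsizes, lsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);

    double *local = malloc(lsizes[0] * lsizes[1] * sizeof(double));

    /* Use the subarray as the file view: each rank reads only its block. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "global_array.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);
    MPI_File_read_all(fh, local, lsizes[0] * lsizes[1],
                      MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(local);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}
```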

MPI_Type_create_darray allows more complex distributed array patterns than single-chunk-each. For distributed linear algebra computations, it might make sense to distribute some matrices row by row - say, if there are 5 MPI tasks, task 0 gets rows 0, 5, 10, ..., and task 1 gets rows 1, 6, 11, and so on. Other matrices might get distributed by columns; or you could distribute them in blocks of rows, columns, or both. These data distributions are the same as those available in the ill-fated HPF (High Performance Fortran), which let you define data-parallel layouts of arrays in this way, on an array-by-array basis.
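A sketch of the row-cyclic case described above, assuming an illustrative 10x4 matrix of doubles (so with 5 tasks, task 0 owns rows 0 and 5, task 1 owns rows 1 and 6, and so on):

```c
#include <mpi.h>

/* Build a darray type for a purely row-cyclic distribution of a
   10x4 matrix of doubles over nprocs tasks. */
MPI_Datatype make_row_cyclic_type(int rank, int nprocs) {
    int gsizes[2]   = {10, 4};
    int distribs[2] = {MPI_DISTRIBUTE_CYCLIC,     /* rows dealt out cyclically */
                       MPI_DISTRIBUTE_NONE};      /* columns not distributed   */
    int dargs[2]    = {1,                         /* one row at a time         */
                       MPI_DISTRIBUTE_DFLT_DARG};
    int psizes[2]   = {nprocs, 1};                /* 1D process grid over rows */

    MPI_Datatype darray;
    MPI_Type_create_darray(nprocs, rank, 2, gsizes, distribs, dargs,
                           psizes, MPI_ORDER_C, MPI_DOUBLE, &darray);
    MPI_Type_commit(&darray);
    return darray;
}
```

Swapping the roles of the two dimensions gives a column-cyclic layout, and changing the distribution arguments (dargs) from 1 to a block size gives block-cyclic distributions in rows, columns, or both.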

The only way I've ever used MPI_Type_create_darray myself, and indeed the only way I've ever seen it used, is to create an MPI file view of a large matrix so the data is distributed in a block-cyclic fashion, so that one can read the file in and then use ScaLAPACK to do parallel linear algebra operations on the distributed matrix.
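A hedged sketch of that use, assuming an 8x8 matrix of doubles laid out block-cyclically with 2x2 blocks over a 2x2 process grid (ScaLAPACK's usual layout); the file name, sizes, and block sizes are placeholders, and the ScaLAPACK descriptor setup is omitted:

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);   /* run with 4 ranks (2x2 grid) */

    int gsizes[2]   = {8, 8};                               /* global matrix */
    int distribs[2] = {MPI_DISTRIBUTE_CYCLIC, MPI_DISTRIBUTE_CYCLIC};
    int dargs[2]    = {2, 2};                               /* 2x2 blocks    */
    int psizes[2]   = {2, 2};                               /* process grid  */

    /* Fortran order to match ScaLAPACK's column-major convention. */
    MPI_Datatype darray;
    MPI_Type_create_darray(nprocs, rank, 2, gsizes, distribs, dargs,
                           psizes, MPI_ORDER_FORTRAN, MPI_DOUBLE, &darray);
    MPI_Type_commit(&darray);

    /* Sizes divide evenly here, so every rank holds the same count. */
    int nlocal = (gsizes[0] * gsizes[1]) / nprocs;
    double *local = malloc(nlocal * sizeof(double));

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "matrix.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_DOUBLE, darray, "native", MPI_INFO_NULL);
    MPI_File_read_all(fh, local, nlocal, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    /* local now holds this rank's block-cyclic piece of the matrix,
       ready to hand to ScaLAPACK routines. */

    free(local);
    MPI_Type_free(&darray);
    MPI_Finalize();
    return 0;
}
```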
