Most appropriate MPI_Datatype for "block decomposition"?


Question

With the help from Jonathan Dursi and osgx (http://stackoverflow.com/users/196561/osgx), I've now done the "row decomposition" among the processes:

Now, I'd like to try the "block decomposition" approach (pictured below):

How should one go about it? This time, the MPI_Datatype will be necessary, right? Which datatype would be most appropriate/easy to use? Or can it plausibly be done without a datatype?

Answer

You can always make do without a datatype by just creating a buffer, copying the data into it, and sending the buffer as a count of the underlying type; that's conceptually the simplest. On the other hand, it's slower and it actually involves a lot more lines of code. Still, it can be handy when you're trying to get something to work; you can then implement the datatype-y version alongside it and make sure you're getting the same answers.
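To make that concrete, here is a minimal sketch of the buffer-copy approach for one ghost column. Everything in it is an assumption for illustration: a row-major N x M local array a of doubles with a one-cell ghost halo, a neighbor rank right, a communicator comm, and mpi.h/stdlib.h included in the surrounding code.

    /* Pairwise exchange with the right neighbor: send my last interior
       column (j = M-2, assuming one ghost layer), receive his first
       interior column into my ghost column (j = M-1). */
    double *sendbuf = malloc(N * sizeof(double));
    double *recvbuf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++)
        sendbuf[i] = a[i*M + (M-2)];          /* pack by hand */
    MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, right, 0,
                 recvbuf, N, MPI_DOUBLE, right, 0,
                 comm, MPI_STATUS_IGNORE);
    for (int i = 0; i < N; i++)
        a[i*M + (M-1)] = recvbuf[i];          /* unpack into ghosts */
    free(sendbuf); free(recvbuf);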

For the ghost-cell filling, in the i direction you don't need a type, as it's similar to what you had been doing; but you can use one, MPI_Type_contiguous, which just specifies a count of some type (which you can do anyway in your send/recv).
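As a sketch, using the same assumed names as above, a contiguous type for one row would look like this; MPI_Type_contiguous just bundles M consecutive elements into a single unit, so sending a row becomes one element of rowtype:

    MPI_Datatype rowtype;
    MPI_Type_contiguous(M, MPI_DOUBLE, &rowtype);   /* M consecutive doubles */
    MPI_Type_commit(&rowtype);
    /* e.g. send the last interior row (i = N-2) to the neighbor below: */
    MPI_Send(&a[(N-2)*M], 1, rowtype, below, 0, comm);
    MPI_Type_free(&rowtype);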

For ghost-cell filling in the j direction, probably the easiest is to use MPI_Type_vector. If you're sending the rightmost column of (say) an array with i=0..N-1, j=0..M-1, you want to send a vector with count=N, blocksize=1, stride=M. That is, you're sending count chunks of 1 value, each separated by M values in the array.
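A minimal sketch of exactly that vector, again assuming the row-major N x M double array a and neighbor rank right from the earlier snippet:

    MPI_Datatype coltype;
    /* N blocks of 1 value each, consecutive blocks M elements apart */
    MPI_Type_vector(N, 1, M, MPI_DOUBLE, &coltype);
    MPI_Type_commit(&coltype);
    /* the rightmost column starts at element (i=0, j=M-1): */
    MPI_Send(&a[M-1], 1, coltype, right, 0, comm);
    MPI_Type_free(&coltype);

Note that the send buffer argument points at the first element of the column; the stride in the type takes care of skipping over the rest of each row.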

You can also use MPI_Type_create_subarray to pull out just the region of the array you want; that's probably a little overkill in this case.
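For completeness, a sketch of the subarray version, assuming you want to send an n x m region out of the N x M local array; the sizes, the starting offsets, and the rank dest are all illustrative choices, not something from the question:

    MPI_Datatype subtype;
    int sizes[2]    = {N, M};   /* extent of the full local array */
    int subsizes[2] = {n, m};   /* extent of the region to send   */
    int starts[2]   = {1, 1};   /* e.g. skipping a one-cell halo  */
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &subtype);
    MPI_Type_commit(&subtype);
    MPI_Send(a, 1, subtype, dest, 0, comm);   /* buffer is the whole array */
    MPI_Type_free(&subtype);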

Now, if as in your previous question you want to be able at some point to gather all the sub-arrays onto one processor, you'll probably be using subarrays, and part of the question is answered here: MPI_Type_create_subarray and MPI_Gather (http://stackoverflow.com/questions/5585630/mpi-type-create-subarray-and-mpi-gather). Note that if your array chunks are of different sizes, though, then things start getting a little trickier.

(Actually, why are you doing the gather onto one processor, anyway? That'll eventually be a scalability bottleneck. If you're doing it for I/O, once you're comfortable with datatypes, you can use MPI-IO for this.)
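To give a flavor of that route, here's a hedged sketch of each rank writing its block into one shared file with a subarray filetype. GN/GM, row0/col0, and the filename are assumptions describing the global array size and where this rank's N x M block (taken to be ghost-free here) sits in it:

    MPI_File fh;
    MPI_Datatype filetype;
    int gsizes[2]  = {GN, GM};      /* global array size          */
    int lsizes[2]  = {N, M};        /* this rank's block size     */
    int gstarts[2] = {row0, col0};  /* block's global offset      */
    MPI_Type_create_subarray(2, gsizes, lsizes, gstarts,
                             MPI_ORDER_C, MPI_DOUBLE, &filetype);
    MPI_Type_commit(&filetype);
    MPI_File_open(comm, "grid.dat", MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    /* the view maps each rank's writes onto its block of the file: */
    MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, a, N*M, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    MPI_Type_free(&filetype);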
