MPI BCast (broadcast) of a std::vector of structs

Question

I've got a question regarding passing of a std::vector of structs via MPI.

First off, details. I'm using OpenMPI 1.4.3 (MPI-2 compliant) with gcc. Note that I can't use boost MPI or OOMPI -- I'm bound to using this version.

I've got a struct to aggregate some data:

  struct Delta {
    Delta() : dX(0.0), dY(0.0), dZ(0.0) {};
    Delta(double dx, double dy, double dz) :
      dX(dx), dY(dy), dZ(dz) {};
    Delta(const Delta& rhs) :
      dX(rhs.dX), dY(rhs.dY), dZ(rhs.dZ) {};

    double dX;
    double dY;
    double dZ;
  };

  typedef std::vector<Delta> DeltaLine;

and I have a DeltaLine that I'd like to broadcast, via MPI, to all the nodes.

Can I do the following safely and portably? This works for me in my test case. I just want to make sure it's legal and kosher across different platforms and according to the C++ and MPI standards.

Thanks! Madeleine.

  //Create an MPI struct for the Delta class
  const int    nItems=3;
  int          blocklengths[nItems] = {1, 1, 1};
  MPI_Datatype types[nItems] = {MPI_DOUBLE, MPI_DOUBLE, MPI_DOUBLE};
  MPI_Datatype MPI_DeltaType;
  MPI_Aint     offsets[nItems];

  offsets[0] = offsetof(Delta, dX);
  offsets[1] = offsetof(Delta, dY);
  offsets[2] = offsetof(Delta, dZ);

  MPI_Type_create_struct(nItems, blocklengths, offsets, types, &MPI_DeltaType);
  MPI_Type_commit(&MPI_DeltaType);

  //This is the vector to be filled, and its size
  DeltaLine deltaLine;
  unsigned deltaLineSize;

  //If this is the master proc, get the DeltaLine and its size
  if(amMaster()) {
    deltaLine = getMasterDeltaLine();
    deltaLineSize = deltaLine.size();
  }

  //Send out the correct size
  MPI_Bcast(&deltaLineSize, 1, MPI_UNSIGNED, COMM_PROC, MPI_COMM_WORLD);

  //Size the delta line vector, and broadcast its contents
  deltaLine.reserve(deltaLineSize);
  MPI_Bcast(&deltaLine.front(), deltaLineSize, MPI_DeltaType, COMM_PROC, MPI_COMM_WORLD);

  //Free up the type
  MPI_Type_free(&MPI_DeltaType);

Solution

The C++ standard guarantees that the elements of std::vector are stored contiguously in memory, and std::vector::reserve() (re-)allocates memory if necessary at the time of the call, so your solution is perfectly valid from a memory-management point of view. However, as Solkar noted, std::vector::reserve() only reserves memory space; the vector object is not aware that data has been written directly into that memory and therefore keeps its previous element count (zero for a freshly created vector). This can be fixed by calling std::vector::resize() instead before the second broadcast operation.
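
For reference, here is a minimal sketch of the broadcast sequence with that fix applied; it reuses the deltaLine, deltaLineSize, MPI_DeltaType and COMM_PROC names from your snippet and assumes the type has already been committed and deltaLineSize is non-zero:

// Broadcast the element count first, so every rank knows how many elements to expect
MPI_Bcast(&deltaLineSize, 1, MPI_UNSIGNED, COMM_PROC, MPI_COMM_WORLD);

// resize(), not reserve(): this default-constructs deltaLineSize elements, so size()
// is correct and the broadcast below writes into live elements
// (on the master rank the vector already has that size, so this is a no-op there)
deltaLine.resize(deltaLineSize);

// Broadcast the actual data into the vector's contiguous storage
MPI_Bcast(&deltaLine.front(), deltaLineSize, MPI_DeltaType, COMM_PROC, MPI_COMM_WORLD);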

One comment, though, that applies whenever constructed MPI datatypes are used to send arrays: you should take care of possible padding between consecutive array elements. In other words, because of possible padding at the end of the struct, the following can hold:

(char*)&deltaLine[1] - (char*)&deltaLine[0] != mpi_extentof(MPI_DeltaType)

where mpi_extentof is the extent of the MPI datatype as returned by MPI_Type_get_extent(). Because MPI uses the extent to determine where each array element starts, it is advisable to explicitly set it for any structure type that is used to send more than one element. With MPI-1 this is typically done by adding one special structure element of the MPI_UB pseudotype, but in modern MPI code (or in MPI-2 in general) one should use MPI_Type_create_resized for that purpose:

//Create an MPI struct for the Delta class
const int    nItems=3;
int          blocklengths[nItems] = {1, 1, 1};
MPI_Datatype types[nItems] = {MPI_DOUBLE, MPI_DOUBLE, MPI_DOUBLE};
MPI_Datatype MPI_DeltaType_proto, MPI_DeltaType;
MPI_Aint     offsets[nItems];

offsets[0] = offsetof(Delta, dX);
offsets[1] = offsetof(Delta, dY);
offsets[2] = offsetof(Delta, dZ);

MPI_Type_create_struct(nItems, blocklengths, offsets, types, &MPI_DeltaType_proto);

// Resize the type so that its length matches the actual structure length

// Get the constructed type lower bound and extent
MPI_Aint lb, extent;
MPI_Type_get_extent(MPI_DeltaType_proto, &lb, &extent);

// Get the actual distance between two vector elements
// (this might not be the best way to do it - if so, substitute a better one)
extent = (char*)&deltaLine[1] - (char*)&deltaLine[0];

// Create a resized type whose extent matches the actual distance
MPI_Type_create_resized(MPI_DeltaType_proto, lb, extent, &MPI_DeltaType);
MPI_Type_commit(&MPI_DeltaType);

In your case there are only double elements in the structure and no padding is expected, so doing all of this is not strictly necessary. But keep it in mind for your future work with MPI.
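
If you want to be defensive about it, here is a sketch of an optional sanity check you could place right after MPI_Type_commit(&MPI_DeltaType): compare the committed type's extent with sizeof(Delta); if the two differ, consecutive std::vector<Delta> elements will not line up with the MPI type and the resizing shown above becomes necessary:

// Optional sanity check: the extent of the committed MPI type should equal the
// size of the C++ struct, otherwise consecutive array elements will not line up
MPI_Aint lb, extent;
MPI_Type_get_extent(MPI_DeltaType, &lb, &extent);
if (extent != static_cast<MPI_Aint>(sizeof(Delta))) {
    // Padding detected - fall back to the MPI_Type_create_resized approach shown above
}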
