Sending a typedef struct containing void* by creating an MPI derived datatype

Problem description

What I understand from studying the MPI specification is that an MPI send primitive refers to a memory location (or a send buffer) containing the data to be sent, and takes the data at that location, which is then passed as a message to another process.

Though it is true that the virtual address of a given process will be meaningless in another process's memory address space, it should be OK to send data pointed to by a pointer, such as a void pointer, as MPI will in any case pass the data itself as a message.

For example the following works correctly:

    // Sender Side.
    int x = 100;
    void* snd;
    MPI_Send(snd, 4, MPI_BYTE, 1, 0, MPI_COMM_WORLD);

    // Receiver Side.
    void* rcv;
    MPI_Recv(rcv, 4, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

But when I add the void* snd to a struct and try to send the struct, this does not succeed.

I don't understand why the previous example works correctly but the following does not.

Here, I have defined a typedef struct and then created an MPI_Datatype from it. By the same reasoning as above, the following should also succeed; unfortunately, it is not working.

Here is the code:

    #include "mpi.h"
    #include<stdio.h>

    int main(int args, char *argv[])
    {
        int rank, source =0, tag=1, dest=1;
        int bloackCount[2];

        MPI_Init(&args, &argv);

        typedef struct {
            void* data;
            int tag; 
        } data;

        data myData;    

        MPI_Datatype structType, oldType[2];
        MPI_Status stat;

        /* MPI_Aint type used to idetify byte displacement of each block (array)*/      
        MPI_Aint offsets[2], extent;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);


        offsets[0] = 0;
        oldType[0] = MPI_BYTE;
            bloackCount[0] = 1;

        MPI_Type_extent(MPI_INT, &extent);

        offsets[1] = 4 * extent;  /*let say the MPI_BYTE will contain ineteger :         size of int * extent */
        oldType[1] = MPI_INT;
        bloackCount[1] = 1;

        MPI_Type_create_struct(2, bloackCount,offsets,oldType, &structType);
        MPI_Type_commit(&structType);


        if(rank == 0){
    int x = 100;
    myData.data = &x;
    myData.tag = 99;
    MPI_Send(&myData,1,structType, dest, tag, MPI_COMM_WORLD);
}
if(rank == 1 ){ 
    MPI_Recv(&myData, 1, structType, source, tag, MPI_COMM_WORLD, &stat);
          // with out this the following printf() will properly print the value 99 for 
          // myData.tag
    int x = *(int *) myData.data;
    printf(" \n Process %d, Received : %d , %d \n\n", rank , myData.tag, x); 
    }   
       MPI_Type_free(&structType);             
       MPI_Finalize();
    }

Error message when running the code (it looks like I am trying to access an invalid memory address space in the second process):

    [ubuntu:04123] *** Process received signal ***
    [ubuntu:04123] Signal: Segmentation fault (11)
    [ubuntu:04123] Signal code: Address not mapped (1)
    [ubuntu:04123] Failing at address: 0xbfe008bc
    [ubuntu:04123] [ 0] [0xb778240c]
    [ubuntu:04123] [ 1] GenericstructType(main+0x161) [0x8048935]
    [ubuntu:04123] [ 2] /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0xb750f4d3]
    [ubuntu:04123] [ 3] GenericstructType() [0x8048741]
    [ubuntu:04123] *** End of error message ***

Can someone please explain to me why it is not working? Any advice would also be appreciated.

Thanks,

Solution

// Sender Side.
int x = 100;
void* snd;
MPI_Send(snd, 4, MPI_BYTE, 1, 0, MPI_COMM_WORLD);

// Receiver Side.
void* rcv;
MPI_Recv(rcv, 4, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

I don't understand why the previous example works correctly but the following does not.

It works (of course, snd and rcv have to be assigned meaningful memory locations as values) because MPI_Send and MPI_Recv take the address of the data location, and both snd and rcv are pointers, i.e. their values are such addresses. For example, the MPI_Send line is not sending the value of the pointer itself but rather the 4 bytes starting at the location that snd points to. The same is true of the call to MPI_Recv and the use of rcv. In order to send the value of the pointer rather than the value it points to, you would have to use:

MPI_Send(&snd, sizeof(void *), MPI_BYTE, ...);

This would send sizeof(void *) bytes, starting from the address where the value of the pointer is stored. It would make very little sense except in some very special cases.
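
For completeness, here is how the first example must look for it to be genuinely correct (a sketch; unlike the original snippet, both pointers now refer to real storage):

// Sender side: snd holds the address of actual data.
int x = 100;
void *snd = &x;
MPI_Send(snd, 4, MPI_BYTE, 1, 0, MPI_COMM_WORLD);

// Receiver side: rcv must point to storage for the incoming bytes.
int y;
void *rcv = &y;
MPI_Recv(rcv, 4, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);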

Why doesn't your second example work? MPI is not a magician: it cannot recognise that part of the memory contains a pointer to another memory block and follow that pointer. That is, when you construct a structured datatype, there is no way to tell MPI that the first element of the structure is actually a pointer, and to make it read the data that this pointer points to. In other words, you must perform explicit data marshalling: construct an intermediate buffer that contains a copy of the memory region pointed to by data.data. Besides, your data structure contains no information about the length of the memory region that data.data points to.
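
As a side note, MPI itself ships a marshalling facility, MPI_Pack/MPI_Unpack, which could be used here. A minimal sender-side sketch, assuming a hypothetical int variable data_size that holds the length in bytes of the region myData.data points to:

int pos = 0, sz_tag, sz_data;
char *packbuf;

// Ask MPI how much buffer space the packed representation needs
MPI_Pack_size(1, MPI_INT, MPI_COMM_WORLD, &sz_tag);
MPI_Pack_size(data_size, MPI_BYTE, MPI_COMM_WORLD, &sz_data);
packbuf = malloc(sz_tag + sz_data);

// Pack the tag, then the pointed-to bytes, into one contiguous buffer
MPI_Pack(&myData.tag, 1, MPI_INT, packbuf, sz_tag + sz_data, &pos, MPI_COMM_WORLD);
MPI_Pack(myData.data, data_size, MPI_BYTE, packbuf, sz_tag + sz_data, &pos, MPI_COMM_WORLD);
MPI_Send(packbuf, pos, MPI_PACKED, dest, tag, MPI_COMM_WORLD);
free(packbuf);

The receiver would MPI_Recv into a byte buffer of the same size and call MPI_Unpack with the same sequence of types.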

Please note something very important. All MPI datatypes have something called a type map. A type map is a list of tuples of the form (basic_type, offset), where basic_type is a primitive language type, e.g. char, int, double, etc., and offset is an offset relative to the beginning of the buffer (the sequence of basic types alone is known as the type signature). One peculiar feature of MPI is that offsets may also be negative, which means that the argument to MPI_Send (or to MPI_Recv, or to any other communication function) may actually point into the middle of the memory area that serves as the data source. When sending data, MPI traverses the type map and takes one element of type basic_type from the corresponding offset, relative to the supplied data buffer address. The built-in MPI datatypes have type maps of only one entry with an offset of 0, e.g.:

MPI_INT      -> (int, 0)
MPI_FLOAT    -> (float, 0)
MPI_DOUBLE   -> (double, 0)

No datatype exists in MPI that can make it dereference a pointer and take the value it points to instead of the pointer value itself.

offsets[0] = 0;
oldType[0] = MPI_BYTE;
blockCount[0] = 1;

MPI_Type_extent(MPI_INT, &extent);

offsets[1] = 4 * extent;
oldType[1] = MPI_INT;
blockCount[1] = 1;

MPI_Type_create_struct(2, blockCount, offsets, oldType, &structType);

This code creates an MPI datatype that has the following type map (assuming int is 4 bytes):

{(byte, 0), (int, 16)}

When supplied as the type argument to MPI_Send, it would instruct the MPI library to take one byte from the beginning of the data buffer and then take the integer value located 16 bytes past the beginning of the data buffer. In total the message would be 5 bytes long, although the span of the buffer area would be 20 bytes.
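
One can verify this by querying the committed datatype; a short sketch using two standard MPI calls:

int type_size;
MPI_Aint lb, type_extent;

MPI_Type_size(structType, &type_size);               // -> 5, bytes of actual data
MPI_Type_get_extent(structType, &lb, &type_extent);  // -> lb = 0, extent = 20, span of the buffer
printf("size = %d, lb = %ld, extent = %ld\n", type_size, (long)lb, (long)type_extent);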

offsets[0] = offsetof(data, data);
oldType[0] = MPI_CHAR;
blockCount[0] = sizeof(void *);

offsets[1] = offsetof(data, tag);
oldType[1] = MPI_INT;
blockCount[1] = 1;

MPI_Type_create_struct(2, blockCount, offsets, oldType, &structType);

This code, taken from Greg Inozemtsev's answer, creates a datatype with the following type map (assuming a 32-bit machine with 32-bit wide pointers and zero padding):

{(char, 0), (char, 1), (char, 2), (char, 3), (int, 4)}

The number of (char, x) entries equals sizeof(void *) (4 by assumption). If used as a datatype, this would take 4 bytes from the beginning of the buffer (i.e. the value of the pointer, the address, not the actual int it points to!) and then an integer from 4 bytes past the beginning, i.e. the value of the tag field in the structure. Once again, you would be sending the value of the pointer and not the data that this pointer points to.
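
Had the structure stored the integer by value instead of behind a pointer, the very same offsetof-based construction would transfer the actual data. A sketch with a hypothetical data_val variant of the struct:

// Hypothetical variant: the payload lives inside the struct, so the
// derived datatype describes real data rather than a pointer value.
typedef struct {
    int data;   // by value instead of void*
    int tag;
} data_val;

offsets[0] = offsetof(data_val, data);
oldType[0] = MPI_INT;
blockCount[0] = 1;

offsets[1] = offsetof(data_val, tag);
oldType[1] = MPI_INT;
blockCount[1] = 1;

MPI_Type_create_struct(2, blockCount, offsets, oldType, &structType);
MPI_Type_commit(&structType);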

The difference between MPI_CHAR and MPI_BYTE is that no type conversion is applied to data of type MPI_BYTE. This is only relevant when running MPI codes in heterogeneous environments. With MPI_CHAR the library might perform data conversion, e.g. convert each character from the ASCII to the EBCDIC character set and vice versa. Using MPI_CHAR in this case is erroneous, but sending pointers in a heterogeneous environment is even more erroneous, so no worries ;)

In light of all this, if I were you, I would consider the solution that suszterpatt has proposed.


For the explicit data marshalling, there are two possible scenarios:

Scenario 1. Each data item pointed to by data.data is of constant size, say data_size bytes. In this case you can construct a structure datatype in the following way:

typedef struct {
   int tag;
   char data[];
} data_flat;

// Put the tag at the beginning
offsets[0] = offsetof(data_flat, tag);
oldType[0] = MPI_INT;
blockCount[0] = 1;

offsets[1] = offsetof(data_flat, data);
oldType[1] = MPI_BYTE;
blockCount[1] = data_size;   // size in bytes of the pointed-to data (assumed known)

MPI_Type_create_struct(2, blockCount, offsets, oldType, &structType);
MPI_Type_commit(&structType);

Then use it like this:

// --- Sender ---

// Make a temporary buffer to hold the data
size_t total_size = offsetof(data_flat, data) + data_size;
data_flat *temp = malloc(total_size);

// Copy data structure content into the temporary flat structure
temp->tag = data.tag;
memcpy(temp->data, data.data, data_size);

// Send the temporary structure
MPI_Send(temp, 1, structType, ...);

// Free the temporary structure
free(temp);

You might also choose not to free the temporary storage and instead reuse it for other instances of the data structure (since by presumption they all point to data of the same size). The receiver would be:

// --- Receiver ---

// Make a temporary buffer to hold the data
size_t total_size = offsetof(data_flat, data) + data_size;
data_flat *temp = malloc(total_size);

// Receive into the temporary structure
MPI_Recv(temp, 1, structType, ...);

// Copy the temporary flat structure into a data structure
data.tag = temp->tag;
data.data = temp->data;
// Do not free the temporary structure as it contains the actual data

Scenario 2. Each data item might be of different size. This one is much more involved and harder to do in a portable way. If speed is not your greatest concern, then you might send the data in two distinct messages for maximum portability, as sketched below. MPI guarantees that order is preserved for messages sent with the same envelope (source, destination, tag, communicator).
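
A minimal sketch of that two-message approach, again assuming a hypothetical int variable data_size on the sending side; both messages use the same envelope, so their order is preserved:

// --- Sender: length first, then the payload ---
MPI_Send(&data_size, 1, MPI_INT, dest, tag, MPI_COMM_WORLD);
MPI_Send(data.data, data_size, MPI_BYTE, dest, tag, MPI_COMM_WORLD);

// --- Receiver: learn the length, allocate, then receive the payload ---
int in_size;
MPI_Recv(&in_size, 1, MPI_INT, source, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
data.data = malloc(in_size);
MPI_Recv(data.data, in_size, MPI_BYTE, source, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);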


You could also implement what suszterpatt proposed in the following way (given that your tags fit into the allowed range):

// --- Send a structure ---
MPI_Send(data.data, data_size, MPI_BYTE, dest, data.tag, MPI_COMM_WORLD);

// --- Receive a structure ---
MPI_Status status;
int msg_size;
// Probe for a message and allocate a big enough buffer
MPI_Probe(source, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_BYTE, &msg_size);
uint8_t *buffer = malloc(msg_size);
// Receive the message
MPI_Recv(buffer, msg_size, MPI_BYTE, source, status.MPI_TAG,
         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
// Fill in the data structure
data.tag = status.MPI_TAG;
data.data = buffer;
