Shared Memory or mmap - Linux C/C++ IPC

Question

The context is Inter-Process-Communication where one process ("Server") has to send fixed-size structs to many listening processes ("Clients") running on the same machine.

I am very comfortable doing this in Socket Programming. To make the communication between the Server and the Clients faster and to reduce the number of copies, I want to try out using Shared Memory (shm) or mmaps.

The OS is RHEL 64bit.

Since I am a newbie, please suggest which one I should use. I'd appreciate it if someone could point me to a book or online resource to learn the same.

Thanks for the answers. I wanted to add that the Server (Market Data Server) will typically be receiving multicast data, which will cause it to be "sending" about 200,000 structs per second to the "Clients", where each struct is roughly 100 Bytes. Does a shm_open/mmap implementation outperform sockets only for large blocks of data, or for a large volume of small structs as well?
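For scale, here is a hypothetical record layout and the back-of-envelope bandwidth this traffic implies; the struct and its field names are invented for illustration, not taken from the question:

/* Hypothetical fixed-size record (~100 bytes); field names are illustrative only. */
#include <stdint.h>

typedef struct {
    uint64_t sequence;       /* monotonically increasing update counter */
    uint64_t timestamp_ns;   /* receive time of the multicast packet    */
    char     symbol[16];     /* instrument identifier                   */
    double   bid, ask;       /* quoted prices                           */
    uint32_t bid_size, ask_size;
    char     pad[44];        /* round the record out to roughly 100 bytes */
} MarketUpdate;

/* Back-of-envelope throughput:
 *   200,000 records/s x ~100 B  ~=  20 MB/s
 * Modest raw bandwidth, so per-message overhead (system calls, copies)
 * matters more than the data volume itself. */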

Answer

I'd use mmap together with shm_open to map shared memory into the virtual address space of the processes. This is relatively direct and clean:


  • you identify your shared memory segment with some kind of symbolic name, something like "/myRegion"
  • with shm_open you open a file descriptor on that region
  • with ftruncate you enlarge the segment to the size you need
  • with mmap you map it into your address space
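A minimal sketch of these four steps on the server side, assuming the placeholder name "/myRegion" and an arbitrary 1 MiB segment size (error handling reduced to aborting):

/* Minimal sketch of the four steps above (server side). */
#include <fcntl.h>      /* O_* constants      */
#include <sys/mman.h>   /* shm_open, mmap     */
#include <sys/stat.h>   /* mode constants     */
#include <unistd.h>     /* ftruncate, close   */
#include <stdio.h>
#include <stdlib.h>

#define REGION_NAME "/myRegion"
#define REGION_SIZE (1 << 20)   /* 1 MiB -- pick whatever you actually need */

int main(void)
{
    /* Steps 1+2: create (or open) the named segment and get a file descriptor */
    int fd = shm_open(REGION_NAME, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); exit(EXIT_FAILURE); }

    /* Step 3: size the segment -- a freshly created segment has length 0 */
    if (ftruncate(fd, REGION_SIZE) == -1) { perror("ftruncate"); exit(EXIT_FAILURE); }

    /* Step 4: map it into this process's address space */
    void *base = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }
    close(fd);  /* the mapping stays valid after closing the descriptor */

    /* ... place your structs and synchronization objects in 'base' ... */

    munmap(base, REGION_SIZE);
    /* shm_unlink(REGION_NAME);  typically done once, when the server shuts down */
    return 0;
}

Clients would do the same but typically open the existing segment without O_CREAT and skip the ftruncate; on older glibc versions you may need to link with -lrt.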

The shmat and co. interfaces have (at least historically) the disadvantage that they may restrict the maximum amount of memory you can map.

Then, all the POSIX thread synchronization tools (pthread_mutex_t, pthread_cond_t, sem_t, pthread_rwlock_t, ...) have initialization interfaces that allow you to use them in a process shared context, too. All modern Linux distributions support this.
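As a sketch of what that initialization looks like, assuming a hypothetical SharedHeader placed at the start of the mapped region (the names are invented for illustration):

/* Sketch: initializing a mutex and condition variable that live inside the
 * shared mapping so several processes can use them (PTHREAD_PROCESS_SHARED). */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  data_ready;
    /* ... the shared structs would follow here ... */
} SharedHeader;

/* Call once, in the process that creates the segment, with 'hdr' pointing
 * into the mmap'ed region. */
static int shared_header_init(SharedHeader *hdr)
{
    pthread_mutexattr_t ma;
    pthread_condattr_t  ca;

    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    if (pthread_mutex_init(&hdr->lock, &ma) != 0) return -1;
    pthread_mutexattr_destroy(&ma);

    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    if (pthread_cond_init(&hdr->data_ready, &ca) != 0) return -1;
    pthread_condattr_destroy(&ca);
    return 0;
}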

Is this preferable to sockets? Performance-wise it could make a bit of a difference, since you don't have to copy things around. But the main point, I guess, is that once you have initialized your segment it is conceptually a bit simpler: to access an item you just take the shared lock, read the data, and then release the lock again.

As @R suggests, if you have multiple readers, pthread_rwlock_t would probably be the best lock structure to use.
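A sketch of the lock/read/unlock pattern with a process-shared pthread_rwlock_t, following the one-writer/many-readers setup described above; SharedSlot, Record, and the function names are placeholders, and error handling is omitted:

/* The server takes the write lock to publish; many clients take the read
 * lock concurrently. One SharedSlot would live in the mmap'ed region. */
#include <pthread.h>

typedef struct { char payload[100]; } Record;

typedef struct {
    pthread_rwlock_t lock;
    Record           latest;
} SharedSlot;

void slot_init(SharedSlot *s)                     /* creator only */
{
    pthread_rwlockattr_t a;
    pthread_rwlockattr_init(&a);
    pthread_rwlockattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_rwlock_init(&s->lock, &a);
    pthread_rwlockattr_destroy(&a);
}

void slot_publish(SharedSlot *s, const Record *r) /* server */
{
    pthread_rwlock_wrlock(&s->lock);
    s->latest = *r;
    pthread_rwlock_unlock(&s->lock);
}

void slot_read(SharedSlot *s, Record *out)        /* clients */
{
    pthread_rwlock_rdlock(&s->lock);
    *out = s->latest;
    pthread_rwlock_unlock(&s->lock);
}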
