Performance difference between IPC shared memory and threads memory


Question


I hear frequently that accessing a shared memory segment between processes has no performance penalty compared to accessing process memory between threads. In other words, a multi-threaded application will not be faster than a set of processes using shared memory (excluding locking or other synchronization issues).

But I have doubts:


1) shmat() maps the local process virtual memory to the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost. In a multi-threaded application there is no extra translation required: all VM addresses are converted to physical addresses, just like in a regular process that does not access shared memory.


2) The shared memory segment must be maintained somehow by the kernel. For example, when all processes attached to the shm are taken down, the shm segment is still up and can be eventually re-accessed by newly started processes. There could be some overhead related to kernel operations on the shm segment.


Is a multi-process shared memory system as fast as a multi-threaded application?

Answer


1) shmat() maps the local process virtual memory to the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost, relative to the number of shm accesses. In a multi-threaded application there is no extra translation required: all VM addresses are converted to physical addresses, as in a regular process that does not access shared memory.


There is no overhead compared to regular memory access, aside from the initial cost of setting up the shared pages - populating the page table in the process that calls shmat() - which on most flavours of Linux is one page-table entry (4 or 8 bytes) per 4 KB of shared memory.


For any relevant comparison, the cost is the same whether the pages are allocated as shared or within the same process.


2) The shared memory segment must be maintained somehow by the kernel. I do not know what that 'somehow' means in terms of performance, but for example, when all processes attached to the shm are taken down, the shm segment is still up and can eventually be re-accessed by newly started processes. There must be at least some degree of overhead related to the things the kernel needs to check during the lifetime of the shm segment.


Whether shared or not, each page of memory has a "struct page" attached to it, with some data about the page. One of the items is a reference count. When a page is given out to a process [whether it is through "shmat" or some other mechanism], the reference count is incremented. When it is freed through some means, the reference count is decremented. If the decremented count is zero, the page is actually freed - otherwise "nothing more happens to it".


The overhead is basically zero compared to any other allocated memory. The same mechanism is used for pages for other purposes anyway - say, for example, a page is also used by the kernel and your process dies: the kernel needs to know not to free that page until it has been released by the kernel as well as by the user process.

创建叉子"时也会发生同样的事情.派生一个进程时,父进程的整个页表实际上都将复制到子进程中,并且所有页都变为只读状态.每当发生写操作时,内核都会发生错误,从而导致该页面被复制-因此该页面现在有两个副本,执行写操作的进程可以修改它的页面,而不会影响其他进程.一旦子进程(或父进程)死亡,则显然这两个进程仍然拥有所有页面(例如,从未写入的代码空间,以及可能从未接触过的一堆公共数据,等等)显然无法释放,直到两个进程都死"为止.再一次,引用计数的页面在这里很有用,因为我们只对每个页面上的引用计数进行递减,并且当引用计数为零时(即,当使用该页面的所有进程都释放了该页面时)实际上返回为有用的页面".

The same thing happens when a "fork" is created. When a process is forked, the entire page-table of the parent process is essentially copied into the child process, and all pages made read-only. Whenever a write happens, a fault is taken by the kernel, which leads to that page being copied - so there are now two copies of that page, and the process doing the writing can modify it's page, without affecting the other process. Once the child (or parent) process dies, of course all pages still owned by BOTH processes [such as the code-space that never gets written, and probably a bunch of common data that never got touched, etc] obviously can't be freed until BOTH processes are "dead". So again, the reference counted pages come in useful here, since we only count down the ref-count on each page, and when the ref-count is zero - that is, when all processes using that page has freed it - the page is actually returned back as a "useful page".


Exactly the same thing happens with shared libraries. If one process uses a shared library, its pages are freed when that process ends. But if two, three or 100 processes use the same shared library, the code obviously has to stay in memory until the pages are no longer needed.


So, basically, all pages in the whole kernel are already reference counted. There is very little overhead.
