Dynamic lock-free memory allocators


Problem Description


One of the difficulties in writing algorithms or data structures that satisfy lock-free progress guarantees is dynamic memory allocation: calling something like malloc or new isn't guaranteed to be lock-free in a portable manner. However, many lock-free implementations of malloc or new exist, and there are also a variety of lock-free memory allocators that can be used to implement lock-free algorithms/data-structures.

However, I still don't understand how this can actually completely satisfy lock-free progress guarantees, unless you specifically limit your data structure or algorithm to some pre-allocated static memory pool. But if you need dynamic memory allocation, I don't understand how any alleged lock-free memory allocator can ever be truly lock-free in the long run. The problem is that no matter how amazing your lock-free malloc or new might be, ultimately you may run out of memory, at which point you have to fall back on asking the operating system for more memory. That means that ultimately you have to call brk() or mmap() or some such low-level equivalent to actually get access to more memory. And there is simply no guarantee that any of these low-level calls are implemented in a lock-free manner.

There's simply no way around this (unless you're using an ancient OS like MS-DOS that doesn't provide memory protection, or you write your own completely lock-free operating system - two scenarios that are neither practical nor likely). So, how can any dynamic memory allocator truly be lock-free?

Solution

As you have found yourself, the fundamental OS allocator is most likely not lock-free, because it has to deal with multiple processes and all manner of interesting stuff that makes it really hard to not introduce some sort of lock.

For some cases, however, "lock-free memory allocation" doesn't mean "never locks", but "statistically locks so rarely that it doesn't really matter", which is fine for anything but the strictest real-time systems. If you don't have high contention on your lock, then lock or no lock doesn't really matter - the point of going lock-free is not so much the overhead of the lock itself as how easily that lock becomes a bottleneck: every thread or process in the system has to pass through this one place to do anything useful, and while doing so has to wait in a queue [it may not be a true queue either; it may be "whoever wakes first" or some other mechanism that decides who goes next after the current caller].

There are a few different options to solve this:

  • If you have a memory pool with a finite size, you can ask the OS for all of that memory at once, when the software starts up. Once the memory has been chunked out from the OS, it can be used as a lock-free pool. The obvious drawback is that there is a limit to how much memory can be allocated; once that limit is reached you either have to fail the application altogether or fail that particular operation. (The first sketch after this list illustrates this approach.)

    Of course, in a system like Linux or Windows, there is still no guarantee that a lock-free allocation means "instant access to the allocated memory", since the system can and will hand out memory without actual physical memory backing it, and a physical page is only assigned once the memory is actually touched. That may involve both locks and, for example, disk I/O to page some other page out to swap.

  • For real-time systems so strict that even a single system call that may contend for a lock is "too much", the solution is of course to use a dedicated OS, one that has a lock-free allocator inside the OS (or at least one with a known, acceptable real-time behaviour - it locks for at most X microseconds [where X can be less than 1.0]). Real-time systems often have a pool of memory and fixed-size buckets for recycling old allocations, which can be done in a lock-free manner - the buckets form a linked list, so you can insert into and remove from that list with atomic compare-and-exchange operations [possibly with retry, so although it's technically lock-free, the wait time is not zero in contended situations]. (The first sketch after this list also shows this bucket scheme.)

  • Another solution that can work is to have "per-thread pools". This can get a bit complicated if you pass data between threads, since you then either have to accept that "memory freed for reuse may end up in a different thread" (which of course leads to problems along the lines of "all the memory now sits in that one thread that collects and frees the information from many other threads, and all other threads have run out of memory") or arrange for freed blocks to be handed back to the thread they came from. (The second sketch after this list shows the basic per-thread idea.)
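
To make the first two bullets concrete, here is a minimal C++ sketch of the combination they describe: one big allocation is requested from the OS at startup, and the resulting fixed-size blocks are then recycled through a lock-free free list managed with atomic compare-and-exchange (with retry). The class name `FixedBlockPool` and all details are illustrative, not taken from any particular allocator; the block size is assumed to be pointer-aligned, and a production version would also need ABA protection (e.g. a generation counter packed next to the head pointer), which is omitted here for brevity.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdlib>
#include <new>

// Illustrative fixed-size-block pool: grab all memory from the OS allocator
// once at startup, then hand blocks out via a lock-free (Treiber-style) stack.
class FixedBlockPool {
public:
    FixedBlockPool(std::size_t block_size, std::size_t block_count)
        : block_size_(block_size < sizeof(Node) ? sizeof(Node) : block_size) {
        // The only call that may lock; it happens once, before real-time work.
        // On Linux one might additionally mlock() the pool and touch every
        // page here, because of the demand-paging caveat mentioned above.
        pool_ = static_cast<char*>(std::malloc(block_size_ * block_count));
        if (!pool_) throw std::bad_alloc();
        // Thread all blocks onto the free list (still single-threaded here).
        for (std::size_t i = 0; i < block_count; ++i) {
            auto* node = reinterpret_cast<Node*>(pool_ + i * block_size_);
            node->next = head_.load(std::memory_order_relaxed);
            head_.store(node, std::memory_order_relaxed);
        }
    }

    ~FixedBlockPool() { std::free(pool_); }

    // Pop a block: a CAS loop with retry - lock-free, but not zero wait time
    // under contention, exactly as noted above.  Returns nullptr when the
    // pool is exhausted (the "fail that particular operation" case).
    void* allocate() {
        Node* old_head = head_.load(std::memory_order_acquire);
        while (old_head &&
               !head_.compare_exchange_weak(old_head, old_head->next,
                                            std::memory_order_acquire,
                                            std::memory_order_acquire)) {
            // Another thread changed the head first; old_head was reloaded.
        }
        return old_head;  // NOTE: real code needs ABA protection here.
    }

    // Push a block back onto the free list.
    void deallocate(void* p) {
        auto* node = static_cast<Node*>(p);
        Node* old_head = head_.load(std::memory_order_relaxed);
        do {
            node->next = old_head;
        } while (!head_.compare_exchange_weak(old_head, node,
                                              std::memory_order_release,
                                              std::memory_order_relaxed));
    }

private:
    struct Node { Node* next; };  // stored inside each free block

    std::size_t block_size_;            // assumed to be pointer-aligned
    char* pool_ = nullptr;              // the one up-front OS allocation
    std::atomic<Node*> head_{nullptr};  // top of the lock-free free list
};
```

Usage is simply `FixedBlockPool pool(64, 1024); void* p = pool.allocate(); pool.deallocate(p);`. The important property is that after the constructor returns, allocate/deallocate never go back to the OS, so they can only spin on the CAS, never block on a lock.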
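
The "per-thread pools" idea can be sketched just as briefly. Everything here (names, the fixed block size) is again purely illustrative: each thread keeps its own intrusive free list in a `thread_local` variable, so the cached allocate/free path touches no shared state at all and therefore never locks; only the refill path calls `malloc` (which may lock), and frees from a thread other than the allocating one are deliberately not handled, which is exactly the complication the answer points out.

```cpp
#include <cstddef>
#include <cstdlib>

// Illustrative per-thread block cache: the hot path uses only thread-local
// state, so it never locks; the cold path falls back to malloc (which may).
namespace per_thread_pool {

constexpr std::size_t kBlockSize = 64;   // illustrative fixed block size

struct Node { Node* next; };
thread_local Node* free_list = nullptr;  // this thread's cached blocks

inline void* allocate() {
    if (Node* n = free_list) {           // fast path: no shared state at all
        free_list = n->next;
        return n;
    }
    // Slow path: refill from the general allocator (may lock internally).
    return std::malloc(kBlockSize < sizeof(Node) ? sizeof(Node) : kBlockSize);
}

inline void deallocate(void* p) {
    // Assumes p was allocated by *this* thread; cross-thread frees would need
    // e.g. a lock-free "remote free" list per owning thread, as discussed above.
    auto* n = static_cast<Node*>(p);
    n->next = free_list;
    free_list = n;
}

}  // namespace per_thread_pool
```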
