Multiple mutex locking strategies and why libraries don't use address comparison


Question

There is a widely known way of locking multiple locks, which relies on choosing a fixed linear ordering and acquiring the locks according to that ordering.

That was proposed, for example, in the answers to "Acquire a lock on two mutexes and avoid deadlock". In particular, the solution based on address comparison seems quite elegant and obvious.
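For concreteness, here is a minimal sketch of that address-ordering idea (not taken from any particular library; std::less is used because comparing unrelated pointers with a raw < is unspecified in C++):

#include <functional>
#include <mutex>

// Sketch: always lock the mutex at the lower address first, so every
// thread acquires any given pair of mutexes in the same global order.
void lock_ordered(std::mutex& a, std::mutex& b)
{
    if(&a == &b) { a.lock(); return; }        // same mutex passed twice
    if(std::less<std::mutex*>()(&a, &b))      // total order over addresses
    {
        a.lock(); b.lock();
    }
    else
    {
        b.lock(); a.lock();
    }
}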

When I tried to check how it is actually implemented, I found, to my surprise, that this solution is not widely used.

To quote the Kernel Docs - Unreliable Guide To Locking:

Textbooks will tell you that if you always lock in the same order, you will never get this kind of deadlock. Practice will tell you that this approach doesn't scale: when I create a new lock, I don't understand enough of the kernel to figure out where in the 5000 lock hierarchy it will fit.

PThreads doesn't seem to have such a mechanism built in at all.

Boost.Thread came up with a completely different solution: lock() for multiple (2 to 5) mutexes is based on trying to lock as many mutexes as possible at the moment.

This is a fragment of the Boost.Thread source code (Boost 1.48.0, boost/thread/locks.hpp:1291):

template<typename MutexType1, typename MutexType2, typename MutexType3>
void lock(MutexType1& m1, MutexType2& m2, MutexType3& m3)
{
    unsigned const lock_count = 3;
    unsigned lock_first = 0;
    for(;;)
    {
        switch(lock_first)
        {
        case 0:
            // Block on m1, then try to lock m2 and m3.
            lock_first = detail::lock_helper(m1, m2, m3);
            if(!lock_first)
                return;
            break;
        case 1:
            // Block on m2 (the one that failed last time), then try m3 and m1.
            lock_first = detail::lock_helper(m2, m3, m1);
            if(!lock_first)
                return;
            // Map the failed position back to the original argument order.
            lock_first = (lock_first + 1) % lock_count;
            break;
        case 2:
            // Block on m3, then try m1 and m2.
            lock_first = detail::lock_helper(m3, m1, m2);
            if(!lock_first)
                return;
            lock_first = (lock_first + 2) % lock_count;
            break;
        }
    }
}

where lock_helper blocks on its first argument, try-locks the rest, and returns 0 on success or, on failure, the position of the first mutex that could not be locked.
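A hypothetical sketch of that helper's contract (simplified; the real boost::detail::lock_helper differs in its details):

#include <mutex>

// Sketch of the helper described above, not Boost's actual code:
// block on m1, then try_lock m2 and m3 in turn. Return 0 with all
// three locked, or release everything and return the position
// (1 for m2, 2 for m3) of the first mutex that could not be acquired.
template<typename M1, typename M2, typename M3>
unsigned lock_helper(M1& m1, M2& m2, M3& m3)
{
    std::unique_lock<M1> l1(m1);                   // blocking acquire
    if(!m2.try_lock())
        return 1;                                  // l1 releases m1 on return
    std::unique_lock<M2> l2(m2, std::adopt_lock);  // owns the held m2
    if(!m3.try_lock())
        return 2;                                  // l1, l2 release m1, m2
    l1.release();                                  // success: keep all three locked
    l2.release();
    return 0;
}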

Why is this solution better than comparing addresses or any other kind of IDs? I don't see any problems with pointer comparison that this kind of "blind" locking avoids.

Are there any other ideas on how to solve this problem at the library level?

Solution

From the bounty text:

I'm not even sure if I can prove correctness of the presented Boost solution, which seems more tricky than the one with linear order.

The Boost solution cannot deadlock because it never waits while already holding a lock. All locks but the first are acquired with try_lock. If any try_lock call fails to acquire its lock, all previously acquired locks are released. Also, in the Boost implementation the new attempt starts from the lock it failed to acquire the previous time, first waiting until that lock is available; this is a smart design decision.
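For reference, C++11 later standardized this kind of deadlock-avoiding multi-lock as std::lock; the algorithm is unspecified by the standard, but implementations typically use a lock-and-try strategy much like the Boost code above. A minimal usage sketch (the mutex names and function are illustrative, not from any library):

#include <mutex>

std::mutex a, b;

void critical_pair()
{
    // std::lock acquires both mutexes via a deadlock-avoidance algorithm.
    std::lock(a, b);
    // Adopt the already-held locks so both are released on scope exit.
    std::lock_guard<std::mutex> ga(a, std::adopt_lock);
    std::lock_guard<std::mutex> gb(b, std::adopt_lock);
    // ... critical section ...
}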

As a general rule, it is better to avoid blocking calls while holding a lock. Therefore, the try-lock solution is preferable where possible (in my opinion). One particular consequence of lock ordering is that the system as a whole might get stuck. Imagine the very last lock (e.g. the one with the biggest address) was acquired by a thread which was then blocked. Now imagine some other thread needs the last lock and another lock; due to the ordering it will first get the other one and then wait on the last lock. The same can happen with all the other locks, and the whole system makes no progress until the last lock is released. Of course this is an extreme and rather unlikely case, but it illustrates the inherent problem with lock ordering: the higher a lock's position in the order, the more indirect impact it has when acquired.

The shortcoming of the try-lock-based solution is that it can cause livelock, and in extreme cases the whole system might also get stuck, at least for some time. Therefore it is important to have some back-off scheme that makes the pauses between locking attempts grow longer over time, and perhaps randomizes them.
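A sketch of such a randomized, growing back-off around a lock/try-lock loop (illustrative only; the function name and constants are assumptions):

#include <chrono>
#include <mutex>
#include <random>
#include <thread>

// Illustrative sketch, not from any library: lock m1, try m2, and on
// failure release everything and sleep for a random, growing delay.
void lock_with_backoff(std::mutex& m1, std::mutex& m2)
{
    std::minstd_rand rng(std::random_device{}());
    long max_delay_us = 1;                     // upper bound of the back-off window
    for(;;)
    {
        m1.lock();
        if(m2.try_lock())
            return;                            // success: both mutexes held
        m1.unlock();                           // back off before retrying
        std::uniform_int_distribution<long> dist(0, max_delay_us);
        std::this_thread::sleep_for(std::chrono::microseconds(dist(rng)));
        if(max_delay_us < 1000)
            max_delay_us *= 2;                 // exponential growth, capped
    }
}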
