Is std::mutex sequentially consistent?

Problem Description

Say, I have two threads A and B writing to global Boolean variables fA and fB respectively, which are initially set to false and are protected by std::mutex objects mA and mB respectively:

// Thread A
mA.lock();
assert( fA == false );
fA = true;
mA.unlock();

// Thread B
mB.lock();
assert( fB == false );
fB = true;
mB.unlock();

Is it possible to observe the modifications on fA and fB in different orders in different threads C and D? In other words, can the following program

#include <atomic>
#include <cassert>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;

mutex mA, mB, coutMutex;
bool fA = false, fB = false;

int main()
{
    thread A{ []{
            lock_guard<mutex> lock{mA};
            fA = true;
        } };
    thread B{ [] {
            lock_guard<mutex> lock{mB};
            fB = true;
        } };
    thread C{ [] { // reads fA, then fB
            mA.lock();
            const auto _1 = fA;
            mA.unlock();
            mB.lock();
            const auto _2 = fB;
            mB.unlock();
            lock_guard<mutex> lock{coutMutex};
            cout << "Thread C: fA = " << _1 << ", fB = " << _2 << endl;
        } };
    thread D{ [] { // reads fB, then fA (i.e. vice versa)
            mB.lock();
            const auto _3 = fB;
            mB.unlock();
            mA.lock();
            const auto _4 = fA;
            mA.unlock();
            lock_guard<mutex> lock{coutMutex};
            cout << "Thread D: fA = " << _4 << ", fB = " << _3 << endl;
        } };
    A.join(); B.join(); C.join(); D.join();
}

legally print

Thread C: fA = 1, fB = 0
Thread D: fA = 0, fB = 1

according to the C++ standard?

Note: A spin-lock can be implemented using std::atomic<bool> variables using either sequential consistent memory order or acquire/release memory order. So the question is whether an std::mutex behaves like a sequentially consistent spin-lock or an acquire/release memory order spin-lock.
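
For concreteness, here is a minimal sketch of such an acquire/release spin-lock; the class name SpinLock and its exact shape are illustrative assumptions, not part of the question, and the sequentially consistent variant would differ only in the memory orders passed to the atomic operations.

#include <atomic>

// Minimal acquire/release spin-lock sketch built on std::atomic<bool>.
// lock() is an acquire operation and unlock() a release operation, matching
// the synchronizes-with guarantee quoted for std::mutex in the answer below.
class SpinLock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        bool expected = false;
        // On success the CAS is an acquire read-modify-write, so everything the
        // previous holder wrote before its unlock() becomes visible here.
        while (!locked.compare_exchange_weak(expected, true,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed))
            expected = false;   // a failed CAS overwrites 'expected'
    }
    void unlock() {
        // Release store: publishes this thread's writes to the next lock().
        locked.store(false, std::memory_order_release);
    }
};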

Solution

That output isn't possible, but std::mutex is not necessarily sequentially consistent. Acquire/release is enough to rule out that behaviour.

std::mutex is not defined in the standard to be sequentially consistent, only that

30.4.1.2 Mutex types [thread.mutex.requirements.mutex]

11 Synchronization: Prior unlock() operations on the same object shall synchronize with (1.10) this operation [lock()].

Synchronize-with seems to be defined in the same way as std::memory_order::release/acquire (see this question).
As far as I can see, an acquire/release spinlock would satisfy the standards for std::mutex.

Big edit:

However, I don't think that means what you think (or what I thought). The output is still not possible, since acquire/release semantics are enough to rule it out. This is a kind of subtle point that is better explained here. It seems obviously impossible at first but I think it's right to be cautious with stuff like this.

From the standard, unlock() synchronises with lock(). That means anything that happens before the unlock() is visible after the lock(). Happens-before (henceforth ->) is a slightly weird relation, explained better in the above link, but because there are mutexes around everything in this example, everything works the way you expect: const auto _1 = fA; happens before const auto _2 = fB;, and any changes visible to a thread when it unlock()s a mutex are visible to the next thread that lock()s that mutex. The relation also has the expected properties, e.g. if X happens before Y and Y happens before Z, then X -> Z, and if X happens before Y, then Y does not happen before X.

From here it is not hard to derive the contradiction and confirm what seems intuitively correct.

In short, there is a well-defined order of operations for each mutex; e.g. for mutex A, threads A, C and D hold the lock in some sequence. For thread D to print fA = 0, it must lock mA before thread A does, and conversely thread C must lock mA after thread A (since it prints fA = 1). So the lock sequence for mA is D(mA) -> A(mA) -> C(mA).

For mutex B the sequence must be C(mB) -> B(mB) -> D(mB).

But from the program we know C(mA) -> C(mB), since thread C takes the two locks in that order; putting the two sequences together gives D(mA) -> A(mA) -> C(mA) -> C(mB) -> B(mB) -> D(mB), which means D(mA) -> D(mB). But the code also gives us D(mB) -> D(mA), which is a contradiction, meaning your observed output is not possible.

This outcome is no different for an acquire/release spinlock. I think everyone was confusing regular acquire/release memory accesses on a variable with accesses to a variable protected by a spinlock. The difference is that with a spinlock, the reading threads also perform a compare/exchange and a release write, which is a completely different scenario from a single release write and acquire read.
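
To make that difference concrete, here is a sketch of what a reader such as thread C does in the two scenarios; the names flagA and lockA are illustrative assumptions, not from the question.

#include <atomic>

std::atomic<bool> flagA{false};   // hypothetical: fA published directly as an atomic
std::atomic<bool> lockA{false};   // hypothetical: spin-lock word guarding plain fA
bool fA = false;

// (a) Plain acquire read: the reader only observes flagA and never writes to it.
//     Two such readers may disagree about the order of two independent release
//     writes, which is the reordering people have in mind for acquire/release.
bool read_plain() {
    return flagA.load(std::memory_order_acquire);
}

// (b) Spin-lock protected read: the reader itself performs a read-modify-write
//     (the compare/exchange) and a release write on lockA, so every reader and
//     the writer are serialized by lockA's single modification order.
bool read_locked() {
    bool expected = false;
    while (!lockA.compare_exchange_weak(expected, true,
                                        std::memory_order_acquire,
                                        std::memory_order_relaxed))
        expected = false;
    const bool value = fA;                          // critical section
    lockA.store(false, std::memory_order_release);  // unlock
    return value;
}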

If you used a sequentially consistent spinlock then this wouldn't affect the output. The only difference is that you could always categorically answer questions like "mutex A was locked before mutex B" from a separate thread that didn't acquire either lock. But for this example and most others, that kind of statement isn't useful, hence acquire/release being the standard.
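
For completeness, a sequentially consistent variant of the same spin-lock sketch would only change the memory orders: every lock and unlock then takes part in the single total order of seq_cst operations, which is what would let a thread holding neither lock answer such ordering questions, while the set of possible outputs for the program above stays the same. The free functions below are illustrative assumptions, not an existing API.

#include <atomic>

// Hypothetical seq_cst variant: same algorithm, stronger memory orders.
void lock_sc(std::atomic<bool>& word) {
    bool expected = false;
    while (!word.compare_exchange_weak(expected, true, std::memory_order_seq_cst))
        expected = false;
}

void unlock_sc(std::atomic<bool>& word) {
    word.store(false, std::memory_order_seq_cst);
}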
