Atomicity on x86


Question




8.1.2 Bus Locking

Intel 64 and IA-32 processors provide a LOCK# signal that is asserted automatically during certain critical memory operations to lock the system bus or equivalent link. While this output signal is asserted, requests from other processors or bus agents for control of the bus are blocked. Software can specify other occasions when the LOCK semantics are to be followed by prepending the LOCK prefix to an instruction.

It comes from the Intel Manual, Volume 3.

It sounds like the atomic operations on memory will be executed directly on memory (RAM). I am confused because I see "nothing special" when I analyze assembly output. Basically, the assembly output generated for std::atomic<int> X; X.load() contains only an "extra" mfence. But that is responsible for proper memory ordering, not for atomicity. If I understand properly, X.store(2) is just mov [somewhere], 2. And that's all. It seems that it doesn't "skip" the cache. I know that moving an aligned value (for example an int) to memory is atomic. However, I am confused.


So, I have presented my doubts but the main question is:

How does the CPU implement atomic operations internally?

Solution

It sounds like the atomic operations on memory will be executed directly on memory (RAM).

Nope, as long as every possible observer in the system sees the operation as atomic, the operation can involve cache only. Satisfying this requirement is much more difficult for atomic read-modify-write operations (like lock add [mem], eax, especially with an unaligned address), which is when a CPU might assert the LOCK# signal. You still wouldn't see any more than that in the asm: the hardware implements the ISA-required semantics for locked instructions.
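For example (a minimal sketch; exact codegen depends on the compiler, but GCC and clang both produce a single locked instruction here):

    #include <atomic>

    std::atomic<int> counter{0};

    void increment() {
        // Compiles to `lock add dword ptr [counter], 1` on x86-64.
        // The lock prefix makes the load+add+store indivisible; for an
        // aligned, cacheable address this is a "cache lock" handled
        // entirely in L1, with no LOCK# signal on any external bus.
        counter.fetch_add(1, std::memory_order_relaxed);
    }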

Although I doubt that there is a physical external LOCK# pin on modern CPUs, where the memory controller is built into the CPU instead of in a separate northbridge chip.


std::atomic<int> X; X.load() puts only "extra" mfence.

Compilers don't emit MFENCE for seq_cst loads. I think I read that MSVC did emit MFENCE for this (maybe to prevent reordering with unfenced NT stores?), but it doesn't anymore: I just tested MSVC 19.00.23026.0. Look for foo and bar in the asm output from this program that dumps its own asm on an online compile & run site.

I think the reason we don't need a fence here is that the x86 memory model disallows both LoadStore and LoadLoad reordering. Earlier (non seq_cst) stores can still be delayed until after a seq_cst load, so it's different from using a stand-alone std::atomic_thread_fence(mo_seq_cst); before an X.load(mo_acquire);
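A small sketch to make that concrete (the asm comments show typical GCC/clang x86-64 output; details vary by compiler version):

    #include <atomic>

    std::atomic<int> X{0};

    int seq_cst_load() {
        // Just a plain load: x86 already forbids LoadLoad and LoadStore
        // reordering, so no fence instruction is needed for the load itself.
        return X.load(std::memory_order_seq_cst);             // mov eax, [X]
    }

    int fence_then_acquire_load() {
        // A stand-alone seq_cst fence is stronger: it must also block
        // StoreLoad reordering with earlier stores, which costs a real
        // barrier instruction.
        std::atomic_thread_fence(std::memory_order_seq_cst);  // mfence
        return X.load(std::memory_order_acquire);             // mov eax, [X]
    }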

If I understand properly the X.store(2) is just mov [somewhere], 2

No, seq_cst stores do require a full memory-barrier instruction, to disallow StoreLoad reordering which could otherwise happen.

MSVC's asm for stores is the same as clang's, using xchg to do the store and provide the memory barrier in the same instruction. (On some CPUs, especially AMD, a locked instruction as a barrier may be cheaper than MFENCE, because IIRC AMD documents extra serialize-the-pipeline semantics (for instruction execution, not just memory ordering) for MFENCE.)
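Side by side (again a sketch of typical codegen, not a guarantee):

    #include <atomic>

    std::atomic<int> X{0};

    void seq_cst_store() {
        // clang/MSVC: xchg does the store and the full barrier in one
        // instruction; some GCC versions emit mov + mfence instead.
        X.store(2, std::memory_order_seq_cst);
    }

    void relaxed_store() {
        // A plain mov is already an atomic store on x86; relaxed ordering
        // asks for nothing more.
        X.store(2, std::memory_order_relaxed);   // mov dword ptr [X], 2
    }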


This question looks like part 2 of your earlier question, Memory Model in C++: sequential consistency and atomicity, where you asked:

How does the CPU implement atomic operations internally?

As you pointed out in the question, atomicity is unrelated to ordering with respect to any other operations. (i.e. memory_order_relaxed). It just means that the operation happens as a single indivisible operation, hence the name, not as multiple parts which can happen partially before and partially after something else.
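A self-contained example of atomicity with no ordering at all: the increments below are free to become visible in any order relative to other memory operations, but none of them can tear or be lost.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> hits{0};

    int main() {
        auto work = [] {
            for (int i = 0; i < 1000000; ++i)
                hits.fetch_add(1, std::memory_order_relaxed);
        };
        std::thread a(work), b(work);
        a.join();
        b.join();
        std::printf("%d\n", hits.load());   // always exactly 2000000
    }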

You get atomicity "for free" with no extra hardware for aligned loads or stores up to the size of the data paths between cores, memory, and I/O busses like PCIe. i.e. between the various levels of cache, and between the caches of separate cores. The memory controllers are part of the CPU in modern designs, so even a PCIe device accessing memory has to go through the CPU's system agent. (This even lets Skylake's eDRAM L4 (not available in any desktop CPUs :( ) work as a memory-side cache (unlike Broadwell, which used it as a victim cache for L3 IIRC), sitting between memory and everything else in the system so it can even cache DMA).

This means the CPU hardware can do whatever is necessary to make sure a store or load is atomic with respect to anything else in the system which can observe it. This is probably not much, if anything. DDR memory uses a wide enough data bus that a 64bit aligned store really does electrically go over the memory bus to the DRAM all in the same cycle. (fun fact, but not important. A serial bus protocol like PCIe wouldn't stop it from being atomic, as long as a single message is big enough. And since the memory controller is the only thing that can talk to the DRAM directly, it doesn't matter what it does internally, just the size of transfers between it and the rest of the CPU). But anyway, this is the "for free" part: no temporary blocking of other requests is needed to keep an atomic transfer atomic.

x86 guarantees that aligned loads and stores up to 64 bits are atomic, but not wider accesses. Low-power implementations are free to break up vector loads/stores into 64-bit chunks like P6 did from PIII until Pentium M.
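This guarantee is what lets a compiler implement an 8-byte atomic store as one ordinary instruction on x86-64 (a sketch; publish is just an illustrative name):

    #include <atomic>
    #include <cstdint>

    std::atomic<std::uint64_t> ts{0};

    void publish(std::uint64_t v) {
        // std::atomic guarantees natural alignment, so this is a single
        // `mov qword ptr [ts], rdi`: the hardware's aligned-8-byte
        // guarantee provides the atomicity for free.
        ts.store(v, std::memory_order_relaxed);
    }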


Atomic ops happen in cache

Remember that atomic just means all observers see it as having happened or not happened, never partially-happened. There's no requirement that it actually reaches main memory right away (or at all, if overwritten soon). Atomically modifying or reading L1 cache is sufficient to ensure that any other core or DMA access will see an aligned store or load happen as a single atomic operation. It's fine if this modification happens long after the store executes (e.g. delayed by out-of-order execution until the store retires).

Modern CPUs like Core2 with 128-bit paths everywhere typically have atomic SSE 128b loads/stores, going beyond what the x86 ISA guarantees. But note the interesting exception on a multi-socket Opteron, probably due to HyperTransport. That's proof that atomically modifying L1 cache isn't sufficient to provide atomicity for stores wider than the narrowest data path (which in this case isn't the path between L1 cache and the execution units).

Alignment is important: A load or store that crosses a cache-line boundary has to be done in two separate accesses. This makes it non-atomic.

x86 guarantees that cached accesses up to 64b are atomic as long as they don't cross a cache-line boundary, on P6 and later. This implies that whole cache lines (64B on modern CPUs) are transferred around atomically, even though that's wider than the data paths. This atomicity isn't totally "free" in hardware, and maybe requires some extra logic to prevent a load from reading a cache-line that's only partially transferred. Although cache-line transfers only happen after the old version was invalidated, so a core shouldn't be reading from the old copy while there's a transfer happening. IDK how to reconcile this with the multi-socket Opteron which seems to only exhibit non-atomicity for 128b SSE loads/stores while transferring them between caches. Possibly Intel and AMD differ on this?
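To see where the boundary case sits, here is a small demo (assuming 64-byte lines; it only inspects addresses and never performs the misaligned access itself):

    #include <cstdint>
    #include <cstdio>

    int main() {
        alignas(64) static unsigned char buf[128];
        // A 4-byte object at offset 62 occupies bytes 62..65: it straddles
        // the boundary between the first and second cache line, so a plain
        // load or store of it takes two separate cache accesses and a
        // concurrent observer could see a torn value.
        std::uintptr_t first = (reinterpret_cast<std::uintptr_t>(buf) + 62) / 64;
        std::uintptr_t last  = (reinterpret_cast<std::uintptr_t>(buf) + 65) / 64;
        std::printf("crosses a cache line: %s\n", first != last ? "yes" : "no");
    }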

For wider operands, like atomically writing new data into multiple entries of a struct, you need to protect it with a lock which all accesses to it respect. (You may be able to use x86 lock cmpxchg16b with a retry loop to do an atomic 16-byte store. Note that there's no way to emulate it without a lock.)
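Through the portable interface, that retry loop looks like this (a sketch; whether each attempt really compiles to lock cmpxchg16b depends on the target, e.g. -mcx16 on x86-64, otherwise the library falls back to a lock):

    #include <atomic>
    #include <cstdint>

    struct alignas(16) Pair {
        std::uint64_t lo, hi;
    };

    std::atomic<Pair> shared{};

    void store16(Pair desired) {
        // No 16-byte store instruction exists, so the "store" is a CAS
        // retry loop; on success the whole 16 bytes are replaced at once.
        Pair expected = shared.load(std::memory_order_relaxed);
        while (!shared.compare_exchange_weak(expected, desired,
                                             std::memory_order_release,
                                             std::memory_order_relaxed)) {
            // expected was reloaded with the current value; just retry.
        }
    }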


Atomic read-modify-write is where it gets harder

(This section isn't finished yet, but posting now before I sleep.)

Each core has a private L1 cache which is coherent with all other cores (using the MOESI protocol). Cache-lines are transferred between levels of cache and main memory in chunks ranging in size from 64 bits to 256 bits. (these transfers may actually be atomic on a whole-cache-line granularity?)

To do an atomic RMW, a core can keep a line of L1 cache in Modified state without accepting any external modifications to the affected cache line between the load and the store; the rest of the system will see the operation as atomic. (And thus it is atomic, because the usual out-of-order execution rules require that the local thread sees its own code as having run in program order.)

It can do this by not processing any cache-coherency messages while the atomic RMW is in-flight (or some more complicated version of this which allows more parallelism for other ops).
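As a rough software analogy only (this is not how the hardware is built; it mimics the striped-lock fallback a non-lock-free libatomic build uses): holding a per-line lock for the whole load+modify+store is the software version of holding the cache line in Modified state and deferring coherency requests.

    #include <atomic>
    #include <cstdint>

    // One lock per "cache line" bucket; zero-initialized (all unlocked).
    static std::atomic<bool> line_locks[64];

    static std::atomic<bool>& lock_for(const void* addr) {
        std::uintptr_t line = reinterpret_cast<std::uintptr_t>(addr) / 64;
        return line_locks[line % 64];
    }

    int toy_fetch_add(int* p, int v) {
        std::atomic<bool>& l = lock_for(p);
        while (l.exchange(true, std::memory_order_acquire)) { /* spin */ }
        int old = *p;   // load
        *p = old + v;   // modify + store, with the "line" exclusively held
        l.store(false, std::memory_order_release);
        return old;
    }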

unaligned locked ops are a problem: we need other cores to see modifications to two cache lines happen as a single atomic operation. This may require actually storing to DRAM, and taking a bus lock. (AMD's optimization manual says this is what happens on their CPUs.)
