What will be used for data exchange between threads executing on one Core with HT?


Problem description


Hyper-Threading Technology is a form of simultaneous multithreading technology introduced by Intel.

The shared resources include the execution engine, caches, and system bus interface; sharing them allows the two logical processors to work with each other more efficiently, and allows a stalled logical processor to borrow resources from the other one.

In an Intel CPU with Hyper-Threading, one CPU core (with several ALUs) can execute instructions from 2 threads in the same clock cycle, and both threads share the store buffer, the L1/L2 caches, and the system bus.

But if two threads execute simultaneously on one core, thread-1 stores an atomic value and thread-2 loads this value, what will be used for this exchange: the shared store buffer, the shared L1/L2 cache, or the usual L3 cache?

What will happen if both threads are from one and the same process (the same virtual address space), and what if they are from two different processes (different virtual address spaces)?

Sandy Bridge Intel CPU - L1 cache:

  • 32 KB - cache size

  • 64 B - cache line size

  • 512 - lines (512 = 32 KB / 64 B)

  • 8-way

  • 64 - number of sets (64 = 512 lines / 8 ways)

  • 6 bits [11:6] - of the virtual address (the index) select the current set number

  • 4 KB - addresses equal modulo 4 KB compete for the same set (32 KB / 8 ways = 4 KB per way)

  • low 12 bits - significant for determining the current set number

  • 4 KB - standard page size

  • low 12 bits - the same in virtual and physical addresses for each address

Solution

I think you'll get a round-trip to L1. (Not the same thing as store->load forwarding within a single thread, which is even faster than that.)

Intel's optimization manual says that store and load buffers are statically partitioned between threads, which tells us a lot about how this will work. I haven't tested most of this, so please let me know if my predictions aren't matching up with experiment.

Update: See this Q&A for some experimental testing of throughput and latency.


A store has to retire in the writing thread, and then commit to L1 from the store buffer/queue some time after that. At that point it will be visible to the other thread, and a load to that address from either thread should hit in L1. Before that, the other thread should get an L1 hit with the old data, and the storing thread should get the stored data via store->load forwarding.

Store data enters the store buffer when the store uop executes, but it can't commit to L1 until it's known to be non-speculative, i.e. it retires. But the store buffer also de-couples retirement from the ROB (the ReOrder Buffer in the out-of-order core) vs. commitment to L1, which is great for stores that miss in cache. The out-of-order core can keep working until the store buffer fills up.


Two threads running on the same core with hyperthreading can see StoreLoad re-ordering if they don't use memory fences, because store-forwarding doesn't happen between threads. Jeff Preshing's Memory Reordering Caught in the Act code could be used to test for it in practice, using CPU affinity to run the threads on different logical CPUs of the same physical core.
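Here is a minimal sketch of that kind of test (my own, not Preshing's code; it assumes Linux, g++ and pthread affinity, and the logical-CPU numbers 0 and 1 are placeholders to replace with two siblings of one physical core, as listed in /sys/devices/system/cpu/cpu*/topology/thread_siblings_list):

```cpp
// Preshing-style StoreLoad reordering detector: each thread stores to its own
// variable, then loads the other thread's variable.  r1 == 0 && r2 == 0 in the
// same round means the store was reordered after the load.
#include <atomic>
#include <cstdio>
#include <functional>
#include <pthread.h>
#include <sched.h>
#include <thread>

std::atomic<int> X{0}, Y{0};
std::atomic<int> go{0}, done{0};
int r1, r2;

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void worker(int cpu, std::atomic<int>& mine, std::atomic<int>& other,
                   int& result, int iterations) {
    pin_to_cpu(cpu);
    for (int i = 1; i <= iterations; ++i) {
        while (go.load(std::memory_order_acquire) != i) {}    // wait for this round
        mine.store(1, std::memory_order_relaxed);             // plain mov store
        std::atomic_signal_fence(std::memory_order_seq_cst);  // compiler barrier only, no mfence
        result = other.load(std::memory_order_relaxed);       // plain mov load
        done.fetch_add(1, std::memory_order_release);
    }
}

int main() {
    const int iterations = 200000;
    int reorders = 0;
    std::thread t1(worker, 0, std::ref(X), std::ref(Y), std::ref(r1), iterations);
    std::thread t2(worker, 1, std::ref(Y), std::ref(X), std::ref(r2), iterations);
    for (int i = 1; i <= iterations; ++i) {
        X.store(0); Y.store(0);
        go.store(i, std::memory_order_release);                // start round i
        while (done.load(std::memory_order_acquire) != 2 * i) {}
        if (r1 == 0 && r2 == 0) ++reorders;                    // StoreLoad reordering observed
    }
    t1.join(); t2.join();
    std::printf("%d reorders in %d iterations\n", reorders, iterations);
}
```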

An atomic read-modify-write operation has to make its store globally visible (commit to L1) as part of its execution, otherwise it wouldn't be atomic. As long as the data doesn't cross a boundary between cache lines, it can just lock that cache line. (AFAIK this is how CPUs do typically implement atomic RMW operations like lock add [mem], 1 or lock cmpxchg [mem], rax.)

Either way, once it's done the data will be hot in the core's L1 cache, where either thread can get a cache hit from loading it.

I suspect that two hyperthreads doing atomic increments to a shared counter (or any other locked operation, like xchg [mem], eax) would achieve about the same throughput as a single thread. This is much higher than for two threads running on separate physical cores, where the cache line has to bounce between the L1 caches of the two cores (via L3).
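A rough way to check that prediction is a sketch like the following (mine, not a rigorous benchmark): run it once pinned to two HT siblings of one physical core and once pinned to two separate physical cores, and compare the rates. The CPU numbers passed on the command line are whatever your machine's topology requires.

```cpp
// Rough throughput probe for lock-ed increments of a shared counter from two threads.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <pthread.h>
#include <sched.h>
#include <thread>

std::atomic<long> counter{0};

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void hammer(int cpu, long iters) {
    pin_to_cpu(cpu);
    for (long i = 0; i < iters; ++i)
        counter.fetch_add(1, std::memory_order_relaxed);   // compiles to lock add [mem], 1 on x86
}

int main(int argc, char** argv) {
    const long iters = 100000000;
    int cpu_a = argc > 1 ? std::atoi(argv[1]) : 0;   // e.g. two HT siblings, or two cores
    int cpu_b = argc > 2 ? std::atoi(argv[2]) : 1;
    auto t0 = std::chrono::steady_clock::now();
    std::thread a(hammer, cpu_a, iters), b(hammer, cpu_b, iters);
    a.join();
    b.join();
    double sec = std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    std::printf("%.1f M increments/s, counter = %ld\n", 2 * iters / sec / 1e6, counter.load());
}
```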

movNT (Non-Temporal) weakly-ordered stores bypass the cache, and put their data into a line-fill buffer. They also evict the line from L1 if it was hot in cache to start with. They probably have to retire before the data goes into a fill buffer, so a load from the other thread probably won't see it at all until it enters a fill-buffer. Then it's probably the same as a movnt store followed by a load inside a single thread (i.e. a round-trip to DRAM, a few hundred cycles of latency). Don't use NT stores for a small piece of data you expect another thread to read right away.
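To make the contrast concrete, a small illustrative sketch (assuming x86 with SSE2 intrinsics; the payload/ready names are hypothetical):

```cpp
// Normal store vs. non-temporal movnti store when another thread reads right away.
#include <atomic>
#include <emmintrin.h>   // _mm_stream_si32, _mm_sfence

int payload;
std::atomic<int> ready{0};

void producer_normal() {
    payload = 42;                               // normal store: line stays hot in this core's L1d
    ready.store(1, std::memory_order_release);
}

void producer_nt() {
    _mm_stream_si32(&payload, 42);              // movnti: data goes to a fill buffer,
                                                // and the line is evicted if it was hot
    _mm_sfence();                               // NT stores are weakly ordered: fence before publishing
    ready.store(1, std::memory_order_release);
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}
    int v = payload;    // after producer_nt, this load likely misses all the way to DRAM
    (void)v;
}
```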


L1 hits are possible because of the way Intel CPUs share the L1 cache. Intel uses virtually indexed, physically tagged (VIPT) L1 caches in most (all?) of their designs. (e.g. the Sandybridge family.) But since the index bits (which select a set of 8 tags) are below the page-offset, it behaves exactly like a PIPT cache (think of it as translation of the low 12 bits being a no-op), but with the speed advantage of a VIPT cache: it can fetch the tags from a set in parallel with the TLB lookup to translate the upper bits. See the "L1 also uses speed tricks that wouldn't work if it was larger" paragraph in this answer.
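As a worked example of that indexing (my own numbers, using the 32 KiB / 8-way / 64 B-line geometry listed in the question):

```cpp
// Which L1d set a given address maps to, for a 32 KiB / 8-way / 64 B-line cache.
// Bits [5:0] are the byte-within-line offset, bits [11:6] are the set index;
// both lie inside the 4 KiB page offset, so virtual and physical addresses agree
// on them, and the tag comes from the translated bits above bit 11.
#include <cstdint>
#include <cstdio>

int main() {
    uint64_t vaddr  = 0x7ffd12345a78;          // arbitrary example address
    uint64_t offset = vaddr & 0x3F;            // 0x38: byte within the 64 B line
    uint64_t set    = (vaddr >> 6) & 0x3F;     // 0x29: one of the 64 sets
    std::printf("offset in line = 0x%llx, set index = 0x%llx\n",
                (unsigned long long)offset, (unsigned long long)set);
    // Addresses that differ only above bit 11 (i.e. at a 4 KiB stride) map to the
    // same set and compete for its 8 ways.
    return 0;
}
```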

Since L1d cache behaves like PIPT, and the same physical address really means the same memory, it doesn't matter whether it's 2 threads of the same process with the same virtual address for a cache line, or whether it's two separate processes mapping a block of shared memory to different addresses in each process. This is why L1d can be (and is) competitively shared by both hyperthreads without risk of false-positive cache hits. Unlike the dTLB, which needs to tag its entries with a core ID.
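A tiny sketch of the different-virtual-address case (my own example using POSIX shared memory; the object name "/l1d_demo" is arbitrary). For brevity it maps one object twice within a single process, which already gives two distinct virtual addresses backed by the same physical page, just as two separate processes would have:

```cpp
// Same physical memory seen through two different virtual addresses.
// (Assumes Linux/POSIX; link with -lrt on older glibc.)
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    int fd = shm_open("/l1d_demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, 4096) != 0) return 1;

    int* a = static_cast<int*>(mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    int* b = static_cast<int*>(mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    *a = 123;                                   // store through one mapping...
    std::printf("a=%p b=%p *b=%d\n",            // ...is visible through the other:
                (void*)a, (void*)b, *b);        // same physical line, different virtual addresses
    shm_unlink("/l1d_demo");
    return 0;
}
```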

A previous version of this answer had a paragraph here based on the incorrect idea that Skylake had reduced L1 associativity. It's Skylake's L2 that's 4-way, vs. 8-way in Broadwell and earlier. Still, the discussion on a more recent answer might be of interest.


Intel's x86 manual vol3, chapter 11.5.6 documents that Netburst (P4) has an option to not work this way. The default is "Adaptive mode", which lets logical processors within a core share data.

There is a "shared mode":

In shared mode, the L1 data cache is competitively shared between logical processors. This is true even if the logical processors use identical CR3 registers and paging modes.

In shared mode, linear addresses in the L1 data cache can be aliased, meaning that one linear address in the cache can point to different physical locations. The mechanism for resolving aliasing can lead to thrashing. For this reason, IA32_MISC_ENABLE[bit 24] = 0 is the preferred configuration for processors based on the Intel NetBurst microarchitecture that support Intel Hyper-Threading Technology.

It doesn't say anything about this for hyperthreading in Nehalem / SnB uarches, so I assume they didn't include "slow mode" support when they introduced HT support in another uarch, since they knew they'd gotten "fast mode" to work correctly in netburst. I kinda wonder if this mode bit only existed in case they discovered a bug and had to disable it with microcode updates.

The rest of this answer only addresses the normal setting for P4, which I'm pretty sure is also the way Nehalem and SnB-family CPUs work.


It would be possible in theory to build an OOO SMT CPU core that made stores from one thread visible to the other as soon as they retired, but before they leave the store buffer and commit to L1d (i.e. before they become globally visible). This is not how Intel's designs work, since they statically partition the store queue instead of competitively sharing it.

Even if the threads shared one store-buffer, store forwarding between threads for stores that haven't retired yet couldn't be allowed because they're still speculative at that point. That would tie the two threads together for branch mispredicts and other rollbacks.

Using a shared store queue for multiple hardware threads would take extra logic to always forward to loads from the same thread, but only forward retired stores to loads from the other thread(s). Besides transistor count, this would probably have a significant power cost. You couldn't just omit store-forwarding entirely for non-retired stores, because that would break single-threaded code.

Some POWER CPUs may actually do this; it seems like the most likely explanation for not all threads agreeing on a single global order for stores. See Will two atomic writes to different locations in different threads always be seen in the same order by other threads?
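That linked question is about the IRIW (independent reads of independent writes) litmus test. A hedged C++ sketch of it, for reference (a real litmus run would loop many times and count outcomes):

```cpp
// IRIW litmus test: relaxed stores, acquire loads so each reader's two loads
// stay in program order.  On x86 / any TSO machine the outcome
// r1==1 && r2==0 && r3==1 && r4==0 is impossible; on a machine where stores
// become visible to some threads (e.g. SMT siblings) before others, the two
// readers can disagree on the order of the independent writes.
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1, r2, r3, r4;

void writer1() { x.store(1, std::memory_order_relaxed); }
void writer2() { y.store(1, std::memory_order_relaxed); }
void reader1() {
    r1 = x.load(std::memory_order_acquire);   // acquire keeps the two loads
    r2 = y.load(std::memory_order_acquire);   // in program order
}
void reader2() {
    r3 = y.load(std::memory_order_acquire);
    r4 = x.load(std::memory_order_acquire);
}

int main() {
    std::thread a(writer1), b(writer2), c(reader1), d(reader2);
    a.join(); b.join(); c.join(); d.join();
    std::printf("r1=%d r2=%d r3=%d r4=%d\n", r1, r2, r3, r4);
}
```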

As @BeeOnRope points out, this wouldn't work for an x86 CPU, only for an ISA that doesn't guarantee a Total Store Order, because this would let the SMT sibling(s) see your store before it becomes globally visible to other cores.

TSO could maybe be preserved by treating data from sibling store-buffers as speculative, or not able to happen before any cache-miss loads (because lines that stay hot in your L1D cache can't contain new stores from other cores). IDK, I haven't thought this through fully. It seems way overcomplicated and probably not able to do useful forwarding while maintaining TSO, even beyond the complications of having a shared store-buffer or probing sibling store-buffers.
