How are cache memories shared in multicore Intel CPUs?


Question


I have a few questions regarding the cache memories used in multicore CPUs or multiprocessor systems. (Although not directly related to programming, this has many repercussions when one writes software for multicore processors/multiprocessor systems, hence asking here!)

  1. In a multiprocessor system or a multicore processor (Intel Quad Core, Core 2 Duo, etc.), does each CPU core/processor have its own cache memory (data and program cache)?

  2. Can one processor/core access another's cache memory? If they are allowed to access each other's caches, then I believe there might be fewer cache misses: if a particular processor's cache does not have some data but some other processor's cache does, a read from memory into the first processor's cache could be avoided. Is this assumption valid and true?

  3. Will there be any problems in allowing any processor to access another processor's cache memory?

Solution

In a multiprocessor system or a multicore processor (Intel Quad Core, Core 2 Duo, etc.), does each CPU core/processor have its own cache memory (data and program cache)?

  1. Yes. It varies by the exact chip model, but the most common design is for each CPU core to have its own private L1 data and instruction caches.

    On old and/or low-power CPUs, the next level of cache is typically a unified L2 cache shared between all cores. Or on 65nm Core2Quad (which was two Core 2 Duo dies in one package), each pair of cores had its own last-level cache and couldn't communicate as efficiently.

Modern mainstream Intel CPUs (since the first-gen i7 CPUs, Nehalem) use 3 levels of cache.

  • 32kiB split L1i/L1d: private per-core (same as earlier Intel)
  • 256kiB unified L2: private per-core. (1MiB on Skylake-avx512).
  • large unified L3: shared among all cores

Last-level cache is a large shared L3. It's physically distributed between cores, with a slice of L3 going with each core on the ring bus that connects the cores. Typically there is 1.5 to 2.25 MB of L3 cache per core, so a many-core Xeon might have a 36 MB L3 cache shared between all its cores. This is why a dual-core chip has 2 to 4 MB of L3, while a quad-core has 6 to 8 MB.

On CPUs other than Skylake-avx512, L3 is inclusive of the per-core private caches, so its tags can be used as a snoop filter to avoid broadcasting requests to all cores. I.e. anything cached in a private L1d, L1i, or L2 must also be allocated in L3. See Which cache mapping technique is used in intel core i7 processor?

David Kanter's Sandybridge write-up has a nice diagram of the memory hierarchy / system architecture, showing the per-core caches and their connection to shared L3, and the DDR3 / DMI (chipset) / PCIe links into that. (This still applies to Haswell / Skylake-client / Coffee Lake, except with DDR4 in later CPUs.)
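
You can also see these levels from software (on Linux, the sizes and sharing are readable from /sys/devices/system/cpu/cpu0/cache/index*/): a pointer-chasing loop makes them visible, because average load latency steps up each time the working set outgrows a cache level. Below is a minimal C sketch of that idea; the cache sizes in the comments and the iteration counts are illustrative assumptions, not measured properties of any particular chip, and it uses the POSIX clock_gettime.

```c
/* Pointer-chase sketch: average load latency steps up as the working
 * set outgrows L1d (~32 KiB), L2 (~256 KiB), and then shared L3.
 * Those sizes are typical for Nehalem-through-Coffee-Lake class
 * parts; your chip may differ.  Build: cc -O2 -o chase chase.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>   /* POSIX clock_gettime */

static double ns_per_load(size_t bytes, long steps)
{
    size_t n = bytes / sizeof(void *);
    void **ring = malloc(n * sizeof(void *));
    size_t *perm = malloc(n * sizeof(size_t));
    if (!ring || !perm) { perror("malloc"); exit(1); }

    /* Sattolo's algorithm: a random single-cycle permutation, so the
     * chase visits every slot in an order the prefetcher can't guess. */
    for (size_t i = 0; i < n; i++) perm[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;              /* j < i: one cycle */
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        ring[i] = &ring[perm[i]];
    free(perm);

    void **p = ring;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < steps; i++)
        p = (void **)*p;               /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile void *sink = p;           /* keep the chase from being */
    (void)sink;                        /* optimized away */
    free(ring);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;
}

int main(void)
{
    for (size_t kib = 16; kib <= 64 * 1024; kib *= 2)
        printf("%6zu KiB: %5.1f ns/load\n",
               kib, ns_per_load(kib * 1024, 5000000L));
    return 0;
}
```

While the working set fits in L1d you should see on the order of 1 to 2 ns per load, a small step once it spills to L2, a larger one in L3, and a jump to tens of nanoseconds once it no longer fits in any cache and goes to DRAM. The exact figures depend on the chip and clock speed.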

Can one processor/core access another's cache memory? If they are allowed to access each other's caches, then I believe there might be fewer cache misses: if a particular processor's cache does not have some data but some other processor's cache does, a read from memory into the first processor's cache could be avoided. Is this assumption valid and true?

  1. No. Each CPU core's L1 caches are tightly integrated into that core. Multiple cores accessing the same data will each have their own copy of it in their own L1d caches, very close to the load/store execution units. (The false-sharing sketch after this answer shows what that costs when two cores write to the same cache line.)

    The whole point of multiple levels of cache is that a single cache can't be both fast enough for very hot data and big enough for less-frequently used data that's still accessed regularly. See Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?

    In Intel's current CPUs, going off-core to another core's caches wouldn't be any faster than just going to L3. And the mesh network between cores that would be required to make it possible would be prohibitively expensive compared to just building a larger / faster L3 cache.

    The small/fast caches built into other cores are there to speed up those cores. Sharing them directly would probably cost more power (and maybe even more transistors / die area) than other ways of increasing cache hit rate. (Power is a bigger limiting factor than transistor count or die area; that's why modern CPUs can afford large private L2 caches.)

    Plus you wouldn't want other cores polluting the small private cache that's probably caching stuff relevant to this core.
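
Both halves of this answer, each core keeping its own L1d copy and the coherence machinery stepping in when cores write the same line, are easy to observe with a false-sharing experiment: each thread writes only its own counter, yet if the two counters sit in one 64-byte line, that line must bounce between the two cores' private L1d caches. The following is a hedged sketch rather than a rigorous benchmark; the 64-byte line size and __attribute__((aligned)) are GCC/Clang assumptions.

```c
/* False-sharing sketch: each thread increments only its own counter,
 * but when both counters live in one 64-byte line, that line
 * ping-pongs between the two cores' private L1d caches through the
 * coherence protocol.  Build: cc -O2 -pthread -o fs fs.c */
#include <pthread.h>
#include <stdio.h>

enum { ITERS = 100 * 1000 * 1000 };

struct counters {
    volatile long a;      /* volatile: keep every store in the loop */
    char pad[56];         /* remove this padding to force false sharing */
    volatile long b;
} __attribute__((aligned(64)));   /* GCC/Clang extension */

static struct counters c;

static void *bump_a(void *unused)
{
    (void)unused;
    for (long i = 0; i < ITERS; i++) c.a++;
    return NULL;
}

static void *bump_b(void *unused)
{
    (void)unused;
    for (long i = 0; i < ITERS; i++) c.b++;
    return NULL;
}

int main(void)
{
    pthread_t ta, tb;
    pthread_create(&ta, NULL, bump_a, NULL);
    pthread_create(&tb, NULL, bump_b, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    printf("a=%ld b=%ld\n", c.a, c.b);
    return 0;
}
```

Run it under time, then remove the padding member and rerun: the version where both counters share a line is typically several times slower, even though the threads never touch each other's data.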

Will there be any problems in allowing any processor to access another processor's cache memory?

  1. Yes -- there simply aren't wires connecting the various CPU caches to the other cores. If a core wants to access data in another core's cache, the only data path through which it can do so is the system bus.

A very important related issue is the cache coherency problem. Consider the following: suppose one CPU core has a particular memory location in its cache, and it writes to that memory location. Then, another core reads that memory location. How do you ensure that the second core sees the updated value? That is the cache coherency problem.

The normal solution is the MESI protocol, or a variation on it. Intel uses MESIF.
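
To make the protocol concrete, here is a toy model of the four MESI states for one cache line, as seen by a single core and driven by local accesses plus snooped requests from other cores. It is a deliberately simplified sketch under stated assumptions: no data movement is modeled, and Intel's MESIF Forward state and the L3 snoop filter are omitted.

```c
/* Toy model of the MESI states for a single cache line as seen by
 * one core.  Local reads/writes and snooped requests from other
 * cores drive the transitions.  Heavily simplified: no data
 * movement, no Forward state (Intel's MESIF), no L3 snoop filter. */
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;
typedef enum { LOCAL_READ, LOCAL_WRITE, SNOOP_READ, SNOOP_WRITE } event_t;

static const char *name[] = { "Invalid", "Shared", "Exclusive", "Modified" };

static mesi_t step(mesi_t s, event_t e, int others_have_copy)
{
    switch (e) {
    case LOCAL_READ:   /* a miss fetches the line; state depends on sharers */
        if (s == INVALID) return others_have_copy ? SHARED : EXCLUSIVE;
        return s;                       /* hit: no state change */
    case LOCAL_WRITE:  /* must own the line exclusively before writing */
        return MODIFIED;                /* from I/S: upgrade by invalidating others */
    case SNOOP_READ:   /* another core reads our line */
        if (s == MODIFIED) return SHARED;   /* write back dirty data, then share */
        if (s == EXCLUSIVE) return SHARED;
        return s;
    case SNOOP_WRITE:  /* another core wants to write: we must invalidate */
        return INVALID;                 /* from M: write back first */
    }
    return s;
}

int main(void)
{
    /* The coherency scenario from the text: core A writes a location,
     * then core B reads it. */
    mesi_t a = INVALID, b = INVALID;

    a = step(a, LOCAL_READ, 0);   printf("A reads  -> A=%s\n", name[a]);
    a = step(a, LOCAL_WRITE, 0);  printf("A writes -> A=%s\n", name[a]);

    /* B's read is snooped by A, which writes back / supplies the data. */
    a = step(a, SNOOP_READ, 0);
    b = step(b, LOCAL_READ, 1);
    printf("B reads  -> A=%s, B=%s (B sees A's updated value)\n",
           name[a], name[b]);
    return 0;
}
```

Running it walks through exactly the scenario above: core A's line goes Invalid, then Exclusive, then Modified, and B's read forces A to write back and drop to Shared, which is how B is guaranteed to see the updated value.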
