How are cache memories shared in multicore Intel CPUs?

Question

I have a few questions regarding the cache memories used in multicore CPUs or multiprocessor systems. (Although not directly related to programming, it has many repercussions when one writes software for multicore processors/multiprocessor systems, hence asking here!)

  1. In a multiprocessor system or a multicore processor (Intel Quad Core, Core 2 Duo, etc.) does each CPU core/processor have its own cache memory (data and program cache)?

  2. Can one processor/core access another's cache memory? If they were allowed to access each other's caches, I believe there might be fewer cache misses: in the scenario where one processor's cache does not have some data but a second processor's cache does, a read from memory into the first processor's cache could be avoided. Is this assumption valid and true?

  3. Will there be any problems in allowing any processor to access another processor's cache memory?

Solution

In a multiprocessor system or a multicore processor (Intel Quad Core, Core 2 Duo, etc.) does each CPU core/processor have its own cache memory (data and program cache)?

  1. Yes. It varies by the exact chip model, but the most common design is for each CPU core to have its own private L1 data and instruction caches.

    On old and/or low-power CPUs, the next level of cache is typically a unified L2 cache, shared between all cores. Or on 65nm Core2Quad (which was two core2duo dies in one package), each pair of cores had its own last-level cache and couldn't communicate as efficiently.

Modern mainstream Intel CPUs (since the first-gen i7 CPUs, Nehalem) use 3 levels of cache.

  • 32kiB split L1i/L1d: private per-core (same as earlier Intel)
  • 256kiB unified L2: private per-core. (1MiB on Skylake-avx512).
  • large unified L3: shared among all cores

The last-level cache is a large shared L3. It's physically distributed between cores, with a slice of L3 going with each core on the ring bus that connects the cores. There is typically 1.5 to 2.25 MB of L3 cache per core, so a many-core Xeon might have a 36 MB L3 cache shared between all its cores. This is why a dual-core chip has 2 to 4 MB of L3, while a quad-core has 6 to 8 MB.

On CPUs other than Skylake-avx512, the L3 is inclusive of the per-core private caches, so its tags can be used as a snoop filter to avoid broadcasting requests to all cores; i.e. anything cached in a private L1d, L1i, or L2 must also be allocated in L3. See Which cache mapping technique is used in Intel Core i7 processor?

David Kanter's Sandybridge write-up has a nice diagram of the memory hierarchy / system architecture, showing the per-core caches and their connection to the shared L3, and the DDR3 / DMI (chipset) / PCIe connections to it. (This still applies to Haswell / Skylake-client / Coffee Lake, except with DDR4 in later CPUs.)
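On a Linux machine you can observe this hierarchy directly: sysfs exposes the level, type, size, and core-sharing of every cache. A minimal sketch in C, assuming the standard /sys/devices/system/cpu/cpu0/cache/ layout (the number of index directories varies by CPU):

```c
// Print the cache hierarchy of CPU 0 as exposed by Linux sysfs.
// On a typical Intel part, the L1/L2 entries' shared_cpu_list covers
// only one core's hyperthreads, while L3's spans all cores.
#include <stdio.h>
#include <string.h>

static int read_line(const char *path, char *buf, size_t len) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    if (!fgets(buf, (int)len, f)) { fclose(f); return -1; }
    buf[strcspn(buf, "\n")] = '\0';  /* strip trailing newline */
    fclose(f);
    return 0;
}

int main(void) {
    char path[256], level[16], type[16], size[16], shared[128];
    for (int idx = 0; ; idx++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/level", idx);
        if (read_line(path, level, sizeof level) != 0) break;  /* no more caches */
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/type", idx);
        read_line(path, type, sizeof type);
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/size", idx);
        read_line(path, size, sizeof size);
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/shared_cpu_list", idx);
        read_line(path, shared, sizeof shared);
        printf("L%s %-12s %-8s shared by CPUs %s\n", level, type, size, shared);
    }
    return 0;
}
```

A private cache prints a short shared_cpu_list (e.g. "0,4" for one core's two hyperthreads) while the shared L3 lists every CPU in the package.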

Can one processor/core access another's cache memory? If they were allowed to access each other's caches, I believe there might be fewer cache misses: in the scenario where one processor's cache does not have some data but a second processor's cache does, a read from memory into the first processor's cache could be avoided. Is this assumption valid and true?

  1. No. Each CPU core's L1 caches are tightly integrated into that core. Multiple cores accessing the same data will each have their own copy of it in their own L1d caches, very close to the load/store execution units. (One software consequence of this, false sharing, is sketched after this answer.)

    The whole point of multiple levels of cache is that a single cache can't be both fast enough for very hot data and big enough for less-frequently used data that's still accessed regularly. See Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?

    Going off-core to another core's caches wouldn't be faster than just going to L3 in Intel's current CPUs. Or rather, the mesh network between cores that would be required to make it faster would be prohibitively expensive compared to just building a larger / faster L3 cache.

    The small/fast caches built into other cores are there to speed up those cores. Sharing them directly would probably cost more power (and maybe even more transistors / die area) than other ways of increasing cache hit rate. (Power is a bigger limiting factor than transistor count or die area; that's why modern CPUs can afford to have large private L2 caches.)

    Plus you wouldn't want other cores polluting the small private cache that's probably caching stuff relevant to this core.
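The main consequence of these private per-core caches for software is false sharing: two threads writing to different variables that happen to sit in the same cache line force that line to bounce between the cores' L1d caches through the coherence protocol. A minimal sketch with POSIX threads (the 64-byte line size and the iteration count are illustrative assumptions):

```c
// False-sharing demo: two threads increment separate counters.
// When the counters share a cache line, the line ping-pongs between
// the two cores' private L1d caches; padding them onto separate
// lines typically makes the loops several times faster.
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000UL

struct counters {
    volatile unsigned long a;   /* written only by thread 1 */
    /* char pad[64];  <- uncomment to put b on its own 64-byte line */
    volatile unsigned long b;   /* written only by thread 2 */
};

static struct counters c;

static void *bump_a(void *arg) {
    (void)arg;
    for (unsigned long i = 0; i < ITERS; i++) c.a++;
    return NULL;
}

static void *bump_b(void *arg) {
    (void)arg;
    for (unsigned long i = 0; i < ITERS; i++) c.b++;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump_a, NULL);
    pthread_create(&t2, NULL, bump_b, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a=%lu b=%lu\n", c.a, c.b);
    return 0;
}
```

Compiled with gcc -O2 -pthread and timed with and without the padding, the padded variant is typically several times faster, because each counter's cache line can stay in Modified state in a single core's private cache instead of migrating on every write.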

Will there be any problems in allowing any processor to access another processor's cache memory?

  1. Yes -- there simply aren't wires connecting the various CPU caches to the other cores. If a core wants to access data in another core's cache, the only data path through which it can do so is the system bus.

A very important related issue is the cache coherency problem. Consider the following: suppose one CPU core has a particular memory location in its cache, and it writes to that memory location. Then, another core reads that memory location. How do you ensure that the second core sees the updated value? That is the cache coherency problem.
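The coherence hardware guarantees that the second core eventually sees the updated value of that single location, but software still has to order writes to *different* locations. A hedged C11 sketch of the usual publish/consume idiom (the data/ready names are illustrative):

```c
// Writer stores a payload, then sets a flag; reader spins on the flag.
// The MESI-family protocol invalidates the reader's stale copy of the
// flag's cache line, so the new value becomes visible; the
// release/acquire pair ensures `data` is seen as written before `ready`.
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static int data;                    /* plain location being published   */
static atomic_bool ready = false;   /* kept consistent by coherence     */

static void *writer(void *arg) {
    (void)arg;
    data = 42;                                       /* 1: write payload */
    atomic_store_explicit(&ready, true,
                          memory_order_release);     /* 2: publish       */
    return NULL;
}

static void *reader(void *arg) {
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                            /* spin until visible */
    printf("data = %d\n", data);                     /* guaranteed 42      */
    return NULL;
}

int main(void) {
    pthread_t w, r;
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}
```

The release store plus the matching acquire load is what turns the hardware's per-line coherence into a usable cross-core handoff of the whole payload.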

The normal solution is the MESI protocol, or a variation on it. Intel uses MESIF.
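For intuition, MESI tracks each cache line in one of four states: Modified, Exclusive, Shared, or Invalid. The next-state function below is a toy sketch only, not Intel's implementation; real protocols also issue bus transactions (read-for-ownership, write-back), and MESIF adds a Forward state for cache-to-cache transfers:

```c
// Toy MESI next-state function for one cache line, from the point of
// view of one cache. Events are a local read/write or a snooped
// read/write from another core.
#include <stdio.h>

typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } mesi_t;
typedef enum { LOCAL_READ, LOCAL_WRITE, SNOOP_READ, SNOOP_WRITE } event_t;

/* other_sharers: whether another cache holds the line when we miss */
static mesi_t mesi_next(mesi_t s, event_t e, int other_sharers) {
    switch (e) {
    case LOCAL_READ:
        if (s == INVALID)                  /* read miss: fetch the line  */
            return other_sharers ? SHARED : EXCLUSIVE;
        return s;                          /* M/E/S hit: state unchanged */
    case LOCAL_WRITE:
        return MODIFIED;                   /* gain ownership, dirty line */
    case SNOOP_READ:
        if (s == MODIFIED || s == EXCLUSIVE)
            return SHARED;                 /* demote: another core reads */
        return s;
    case SNOOP_WRITE:
        return INVALID;                    /* another core took ownership */
    }
    return s;
}

int main(void) {
    const char *name[] = { "M", "E", "S", "I" };
    mesi_t s = INVALID;
    s = mesi_next(s, LOCAL_READ, 0);   /* I -> E: exclusive read miss */
    s = mesi_next(s, LOCAL_WRITE, 0);  /* E -> M: silent upgrade      */
    s = mesi_next(s, SNOOP_READ, 0);   /* M -> S: another core reads  */
    s = mesi_next(s, SNOOP_WRITE, 0);  /* S -> I: another core writes */
    printf("final state: %s\n", name[s]);
    return 0;
}
```

Walking the transitions in main shows the common lifecycle of a contended line: an exclusive read miss, a silent upgrade to Modified on write, demotion to Shared when another core reads, and invalidation when another core writes.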
