How are cache memories shared in multicore Intel CPUs?

Question

I have a few questions regarding Cache memories used in Multicore CPUs or Multiprocessor systems. (Although not directly related to programming, it has many repercussions while one writes software for multicore processors/multiprocessors systems, hence asking here!)

  1. In a multiprocessor system or a multicore processor (Intel Quad Core, Core two Duo etc..) does each cpu core/processor have its own cache memory (data and program cache)?

Can one processor/core access another's cache memory? If they were allowed to access each other's caches, I believe there might be fewer cache misses: if one processor's cache doesn't have some data but a second processor's cache does, the first processor could avoid a read from main memory. Is this assumption valid and true?

Will there be any problems in allowing any processor to access other processor's cache memory?

Answer

In a multiprocessor system or a multicore processor (Intel Quad Core, Core two Duo etc..) does each cpu core/processor have its own cache memory (data and program cache)?

  1. Yes. It varies by the exact chip model, but the most common design is for each CPU core to have its own private L1 data and instruction caches.

On old and/or low-power CPUs, the next level of cache is typically a unified L2 shared between all cores. Or on 65nm Core2Quad (which was two Core2Duo dies in one package), each pair of cores had its own last-level cache and couldn't communicate as efficiently.

Modern mainstream Intel CPUs (since the first-gen i7 CPUs, Nehalem) use 3 levels of cache.

  • 32 kiB split L1i/L1d: private per-core (same as earlier Intel)
  • 256 kiB unified L2: private per-core. (1 MiB on Skylake-avx512.)
  • Large unified L3: shared among all cores

Last-level cache is a large shared L3. It's physically distributed between cores, with a slice of L3 going with each core on the ring bus that connects the cores. Typically 1.5 to 2.25 MB of L3 cache per core, so a many-core Xeon might have a 36 MB L3 cache shared between all its cores. This is why a dual-core chip has 2 to 4 MB of L3, while a quad-core has 6 to 8 MB.

On CPUs other than Skylake-avx512, L3 is inclusive of the per-core private caches so its tags can be used as a snoop filter to avoid broadcasting requests to all cores. i.e. anything cached in a private L1d, L1i, or L2, must also be allocated in L3. See Which cache mapping technique is used in intel core i7 processor?

David Kanter's Sandybridge write-up has a nice diagram of the memory hierarchy / system architecture, showing the per-core caches and their connection to the shared L3, and DDR3 / DMI (chipset) / PCIe connecting to that. (This still applies to Haswell / Skylake-client / Coffee Lake, except with DDR4 in the newer CPUs.)

Can one processor/core access another's cache memory? If they were allowed to access each other's caches, I believe there might be fewer cache misses: if one processor's cache doesn't have some data but a second processor's cache does, the first processor could avoid a read from main memory. Is this assumption valid and true?

  1. No. Each CPU core's L1 caches tightly integrate into that core. Multiple cores accessing the same data will each have their own copy of it in their own L1d caches, very close to the load/store execution units.

The whole point of multiple levels of cache is that a single cache can't be fast enough for very hot data, but can't be big enough for less-frequently used data that's still accessed regularly. See Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?

Going off-core to another core's caches wouldn't be faster than just going to L3 in Intel's current CPUs; the mesh network between cores required to make this happen would be prohibitive compared to just building a larger / faster L3 cache.

The small/fast caches built in to other cores are there to speed up those cores. Sharing them directly would probably cost more power (and maybe even more transistors / die area) than other ways of increasing the cache hit rate. (Power is a bigger limiting factor than transistor count or die area; that's why modern CPUs can afford large private L2 caches.)

Plus you wouldn't want other cores polluting the small private cache that's probably caching stuff relevant to this core.

Will there be any problems in allowing any processor to access other processor's cache memory?

  1. Yes - there are simply no wires connecting the various CPU caches to other cores. If a core wants to access data in another core's cache, the only data path through which it can do so is the system bus.

A very important related issue is the cache coherency problem. Consider the following: suppose one CPU core has a particular memory location in its cache, and it writes to that memory location. Then, another core reads that memory location. How do you ensure that the second core sees the updated value? That is the cache coherency problem.

The normal solution is the MESI protocol, or a variation on it. Intel uses MESIF.
