What is a "cache-friendly" code?

Question

"缓存不友好的代码"和"缓存友好的"代码有什么区别?

What is the difference between "cache unfriendly code" and the "cache friendly" code?

How can I make sure I write cache-efficient code?

Answer

Preliminaries

On modern computers, only the lowest level memory structures (the registers) can move data around in single clock cycles. However, registers are very expensive and most computer cores have less than a few dozen registers (few hundred to maybe a thousand bytes total). At the other end of the memory spectrum (DRAM), the memory is very cheap (i.e. literally millions of times cheaper) but takes hundreds of cycles after a request to receive the data. To bridge this gap between super fast and expensive and super slow and cheap are the cache memories, named L1, L2, L3 in decreasing speed and cost. The idea is that most of the executing code will be hitting a small set of variables often, and the rest (a much larger set of variables) infrequently. If the processor can't find the data in L1 cache, then it looks in L2 cache. If not there, then L3 cache, and if not there, main memory. Each of these "misses" is expensive in time.

(The analogy is: cache memory is to system memory, as system memory is to hard disk storage. Hard disk storage is super cheap but very slow.)

Caching is one of the main methods to reduce the impact of latency. To paraphrase Herb Sutter (cfr. links below): increasing bandwidth is easy, but we can't buy our way out of latency.

Data is always retrieved through the memory hierarchy (smallest == fastest to slowest). A cache hit/miss usually refers to a hit/miss in the highest level of cache in the CPU -- by highest level I mean the largest == slowest. The cache hit rate is crucial for performance since every cache miss results in fetching data from RAM (or worse ...) which takes a lot of time (hundreds of cycles for RAM, tens of millions of cycles for HDD). In comparison, reading data from the (highest level) cache typically takes only a handful of cycles.

In modern computer architectures, the performance bottleneck is leaving the CPU die (e.g. accessing RAM or higher). This will only get worse over time. Increasing the processor frequency is currently no longer relevant to improving performance: the problem is memory access. Hardware design efforts in CPUs therefore currently focus heavily on optimizing caches, prefetching, pipelines and concurrency. For instance, modern CPUs spend around 85% of the die on caches and up to 99% for storing/moving data!

There is quite a lot to be said on the subject. Here are a few great references about caches, memory hierarchies and proper programming:

  • Agner Fog's page. In his excellent documents, you can find detailed examples covering languages ranging from assembly to C++.
  • If you are into videos, I strongly recommend having a look at Herb Sutter's talk on machine architecture (youtube) (specifically check 12:00 and onwards!).
  • Slides about memory optimization by Christer Ericson (director of technology @ Sony)
  • LWN.net's article "What every programmer should know about memory"

A very important aspect of cache-friendly code is all about the principle of locality, the goal of which is to place related data close in memory to allow efficient caching. In terms of the CPU cache, it's important to be aware of cache lines to understand how this works: How do cache lines work?

The following particular aspects are of high importance to optimize caching:

  1. Temporal locality: when a given memory location was accessed, it is likely that the same location is accessed again in the near future. Ideally, this information will still be cached at that point.
  2. Spatial locality: this refers to placing related data close to each other. Caching happens on many levels, not just in the CPU. For example, when you read from RAM, typically a larger chunk of memory is fetched than what was specifically asked for, because very often the program will require that data soon. HDD caches follow the same line of thought. Specifically for CPU caches, the notion of cache lines is important (see the sketch below).
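
As a minimal illustration of both principles (a sketch only; the names and the strided variant are mine, not from the original answer), the sequential loop below reuses the running sum (temporal locality) and walks contiguous memory (spatial locality), while the strided loop does the same work but wastes most of every cache line it pulls in:

#include <cstddef>
#include <vector>

// Sequential pass: the running `sum` stays hot in a register (temporal
// locality) and v[i], v[i+1], ... share cache lines (spatial locality).
double sum_sequential(const std::vector<double>& v) {
    double sum = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i)
        sum += v[i];
    return sum;
}

// Same arithmetic, same result, but each inner step jumps `stride`
// elements ahead; for a large stride nearly every access pulls in a
// fresh cache line and uses only one value from it.
double sum_strided(const std::vector<double>& v, std::size_t stride) {
    double sum = 0.0;
    for (std::size_t start = 0; start < stride; ++start)
        for (std::size_t i = start; i < v.size(); i += stride)
            sum += v[i];
    return sum;
}

Only the access pattern differs between the two functions, which is typically enough to produce a large runtime gap on arrays that don't fit in cache.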

Use appropriate c++ containers

A simple example of cache-friendly versus cache-unfriendly is c++'s std::vector versus std::list. Elements of a std::vector are stored in contiguous memory, and as such accessing them is much more cache-friendly than accessing elements in a std::list, which stores its content all over the place. This is due to spatial locality.

A very nice illustration of this is given by Bjarne Stroustrup in this youtube clip (thanks to @Mohammad Ali Baydoun for the link!).
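
A rough timing sketch of that difference (not a rigorous benchmark; the container size and the use of std::accumulate are arbitrary choices here):

#include <chrono>
#include <cstddef>
#include <iostream>
#include <list>
#include <numeric>
#include <vector>

int main() {
    const std::size_t N = 10'000'000;             // arbitrary size for the demo
    std::vector<int> vec(N, 1);                   // contiguous storage
    std::list<int>   lst(vec.begin(), vec.end()); // one heap node per element

    auto time_sum = [](const char* name, const auto& c) {
        auto t0 = std::chrono::steady_clock::now();
        long long s = std::accumulate(c.begin(), c.end(), 0LL);
        auto t1 = std::chrono::steady_clock::now();
        std::cout << name << ": sum=" << s << ", "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms\n";
    };

    time_sum("vector", vec); // prefetch-friendly: neighbors share cache lines
    time_sum("list", lst);   // pointer chasing: each ++it may miss the cache
}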

Don't neglect the cache in data structure and algorithm design

Whenever possible, try to adapt your data structures and order of computations in a way that allows maximum use of the cache. A common technique in this regard is cache blocking (Archive.org version), which is of extreme importance in high-performance computing (cfr. for example ATLAS).
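
As a sketch of the idea (the tile size BLOCK is a tunable assumption; real codes pick it so the working tiles fit in the target cache), here is loop tiling applied to a naive matrix transpose:

#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t BLOCK = 64; // tile side; tune so tiles fit in L1/L2

// Transpose an n x n row-major matrix one BLOCK x BLOCK tile at a time,
// so the lines of both src and dst touched by a tile stay cache-resident.
void transpose_blocked(const std::vector<double>& src,
                       std::vector<double>& dst, std::size_t n) {
    for (std::size_t ii = 0; ii < n; ii += BLOCK)
        for (std::size_t jj = 0; jj < n; jj += BLOCK)
            for (std::size_t i = ii; i < std::min(ii + BLOCK, n); ++i)
                for (std::size_t j = jj; j < std::min(jj + BLOCK, n); ++j)
                    dst[j * n + i] = src[i * n + j]; // write transposed
}

Without the two outer tile loops, the writes to dst stride through memory and evict lines long before they are fully used.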

Know and exploit the implicit structure of data

Another simple example, which many people in the field sometimes forget, is column-major (ex. fortran, matlab) vs. row-major ordering (ex. c, c++) for storing two-dimensional arrays. For example, consider the following matrix:

1 2
3 4

In row-major ordering, this is stored in memory as 1 2 3 4; in column-major ordering, this would be stored as 1 3 2 4. It is easy to see that implementations which do not exploit this ordering will quickly run into (easily avoidable!) cache issues. Unfortunately, I see stuff like this very often in my domain (machine learning). @MatteoItalia showed this example in more detail in his answer.

When fetching a certain element of a matrix from memory, elements near it will be fetched as well and stored in a cache line. If the ordering is exploited, this will result in fewer memory accesses (because the next few values which are needed for subsequent computations are already in a cache line).

For simplicity, assume the cache comprises a single cache line which can contain 2 matrix elements and that when a given element is fetched from memory, the next one is too. Say we want to take the sum over all elements in the example 2x2 matrix above (let's call it M):

Exploiting the ordering (e.g. changing the column index first in c++):

M[0][0] (memory) + M[0][1] (cached) + M[1][0] (memory) + M[1][1] (cached)
= 1 + 2 + 3 + 4
--> 2 memory accesses; 2 cache hits

Not exploiting the ordering (e.g. changing the row index first):

M[0][0] (memory) + M[1][0] (memory) + M[0][1] (memory) + M[1][1] (memory)
= 1 + 3 + 2 + 4
--> 4 memory accesses; 0 cache hits
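
The same point written as the two possible loop nests over a row-major array (a sketch; flat indexing into a std::vector is just one common way to store the matrix):

#include <cstddef>
#include <vector>

// Row-major layout: element (i, j) lives at m[i * n + j].
// Column index in the inner loop -> contiguous accesses, as in the
// "exploiting the ordering" count above.
double sum_good_order(const std::vector<double>& m, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            s += m[i * n + j];
    return s;
}

// Row index in the inner loop -> stride-n accesses; for large n almost
// every access misses, as in the "not exploiting the ordering" count.
double sum_bad_order(const std::vector<double>& m, std::size_t n) {
    double s = 0.0;
    for (std::size_t j = 0; j < n; ++j)
        for (std::size_t i = 0; i < n; ++i)
            s += m[i * n + j];
    return s;
}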

Avoid unpredictable branches

Modern architectures feature pipelines, and compilers are becoming very good at reordering code to minimize delays due to memory access. When your critical code contains (unpredictable) branches, it is hard or impossible to prefetch data, which indirectly leads to more cache misses. This is explained very well here (thanks to @0x90 for the link): Why is processing a sorted array faster than processing an unsorted array?
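
A minimal sketch of that effect (the 128 threshold mirrors the linked question and is otherwise arbitrary):

#include <algorithm>
#include <vector>

// With random data the branch below is mispredicted roughly half the
// time; sorting first makes it almost perfectly predictable.
long long sum_large_values(std::vector<int>& data, bool sort_first) {
    if (sort_first)
        std::sort(data.begin(), data.end());
    long long sum = 0;
    for (int x : data)
        if (x >= 128)        // the (un)predictable branch
            sum += x;
    return sum;
}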

Avoid virtual functions

In the context of c++, virtual methods represent a controversial issue with regard to cache misses (a general consensus exists that they should be avoided when possible in terms of performance). Virtual functions can induce cache misses during look up, but this only happens if the specific function is not called often (otherwise it would likely be cached), so this is regarded as a non-issue by some. For reference about this issue, check out: What is the performance cost of having a virtual method in a C++ class?
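
For concreteness, here is the shape of the construct under discussion (a sketch only; the Shape/Square names are illustrative): the indirect call goes through the object's vtable pointer, an extra dependent load, and usually cannot be inlined when the concrete type isn't known statically.

#include <memory>
#include <vector>

struct Shape {
    virtual double area() const = 0;  // dispatched through the vtable
    virtual ~Shape() = default;
};

struct Square : Shape {
    double side = 1.0;
    double area() const override { return side * side; }
};

// Each iteration loads the vtable pointer, then the function pointer,
// before it can call: two dependent loads that may miss the cache.
double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double total = 0.0;
    for (const auto& s : shapes)
        total += s->area();
    return total;
}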

Common problems

A common problem in modern architectures with multiprocessor caches is called false sharing. This occurs when each individual processor is attempting to use data in another memory region and attempts to store it in the same cache line. This causes the cache line -- which contains data another processor can use -- to be overwritten again and again. Effectively, different threads make each other wait by inducing cache misses in this situation. See also (thanks to @Matt for the link): How and when to align to cache line size?
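
A common mitigation is to pad or align per-thread data to the cache-line size. A minimal sketch, assuming a 64-byte line (the actual size is hardware-dependent; since C++17 you could query std::hardware_destructive_interference_size instead):

#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// One counter per thread, each aligned to its own (assumed 64-byte)
// cache line, so concurrent increments don't invalidate each other's lines.
struct alignas(64) PaddedCounter {
    std::atomic<long> value{0};
};

void count_in_parallel(std::size_t num_threads, long iters) {
    std::vector<PaddedCounter> counters(num_threads);
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < num_threads; ++t)
        workers.emplace_back([&counters, t, iters] {
            for (long i = 0; i < iters; ++i)
                counters[t].value.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& w : workers) w.join();
}

Dropping the alignas(64) puts several counters on one line and typically slows this down dramatically on multi-core hardware.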

An extreme symptom of poor caching in RAM memory (which is probably not what you mean in this context) is so-called thrashing. This occurs when the process continuously generates page faults (e.g. accesses memory which is not in the current page) which require disk access.
