Why does cache locality matter for array performance?


Question


In the following blog there is a statement about the advantage of arrays over linked lists:


Arrays have better cache locality that can make a pretty big difference in performance.


What does that mean? I don't understand how cache locality can provide a huge performance benefit.

Answer

See my answer about spatial and temporal locality.


In particular, arrays are contiguous memory blocks, so large chunks of them will be loaded into the cache upon first access. This makes it comparatively quick to access future elements of the array. Linked lists on the other hand aren't necessarily in contiguous blocks of memory, and could lead to more cache misses, which increases the time it takes to access them.


Consider the following possible memory layouts for an array data and a linked list l_data of large structs:

Address      Contents       | Address      Contents
ffff 0000    data[0]        | ffff 1000    l_data
ffff 0040    data[1]        |   ....
ffff 0080    data[2]        | ffff 3460    l_data->next
ffff 00c0    data[3]        |   ....
ffff 0100    data[4]        | ffff 8dc0    l_data->next->next
                            | ffff 8e00    l_data->next->next->next
                            |   ....
                            | ffff 8f00    l_data->next->next->next->next


If we wanted to loop through this array, the first access to ffff 0000 would require us to go to memory to retrieve it (a very slow operation in CPU cycles). However, after the first access the rest of the array would be in the cache, and subsequent accesses would be much quicker. With the linked list, the first access to ffff 1000 would also require us to go to memory. Unfortunately, the processor will cache only the memory directly surrounding this location, say up to ffff 2000. As you can see, this doesn't actually capture any of the other elements of the list, which means that when we go to access l_data->next, we will again have to go to memory.
