Virtually addressed Cache


Question

How do associativity and page size constrain the cache size in a virtually addressed cache architecture?

In particular, I am looking for an example of the following statement:

If C ≤ (page_size × associativity), the cache index bits come only from the page offset (and are therefore the same in the virtual address and the physical address).

Answer

Intel CPUs have used an 8-way associative 32 KiB L1D with 64 B lines for many years, for exactly this reason. Pages are 4 KiB, so the page offset is 12 bits, exactly the number of bits that make up the index and the offset within a cache line.
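
A minimal sketch of that arithmetic (the geometry constants are the ones quoted above; the variable names are just for illustration):

    # Intel L1D geometry from above: 32 KiB, 8-way, 64-byte lines, 4 KiB pages
    cache_size    = 32 * 1024
    associativity = 8
    line_size     = 64
    page_size     = 4 * 1024

    num_sets = cache_size // (associativity * line_size)        # 64 sets

    offset_bits      = line_size.bit_length() - 1               # 6 bits within a line
    index_bits       = num_sets.bit_length() - 1                # 6 bits to pick a set
    page_offset_bits = page_size.bit_length() - 1               # 12 bits

    assert offset_bits + index_bits == page_offset_bits         # 6 + 6 == 12
    assert cache_size <= page_size * associativity               # 32 KiB <= 4 KiB * 8

So the index and line-offset fields together occupy exactly the 12 page-offset bits, which is the C ≤ page_size × associativity condition with equality.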

See the "L1 also uses speed tricks that wouldn't work if it was larger" paragraph in this answer for more details on how this lets the cache avoid aliasing problems, like a PIPT cache, while still being as fast as a VIPT cache.

The idea is that the virtual-address bits below the page offset are already physical address bits. So a VIPT cache that works this way is more like a PIPT cache that gets the translation of the index bits for free.
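
As a rough illustration of that point (a hypothetical address split using the same geometry as above; the addresses and field widths are only examples, not any particular CPU's layout):

    OFFSET_BITS      = 6       # 64-byte lines
    INDEX_BITS       = 6       # 64 sets
    PAGE_OFFSET_BITS = 12      # 4 KiB pages

    def split(addr):
        """Split an address into (tag, set index, line offset) fields."""
        offset = addr & ((1 << OFFSET_BITS) - 1)
        index  = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
        tag    = addr >> (OFFSET_BITS + INDEX_BITS)
        return tag, index, offset

    # A virtual address and the physical address it maps to share bits [11:0]
    # (the page offset), so both always select the same set; only the tag
    # comparison needs the translated (physical) address from the TLB.
    page_mask = (1 << PAGE_OFFSET_BITS) - 1
    virt = 0x7FFD12345678
    phys = (0x98765 << PAGE_OFFSET_BITS) | (virt & page_mask)
    assert split(virt)[1] == split(phys)[1]

Because the set can be chosen before (or in parallel with) the TLB lookup, the cache is indexed virtually but behaves, for aliasing purposes, as if it were indexed physically.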

