Why implement a data cache and an instruction cache to reduce misses?


Problem description

I failed on a question like this:


In the context of a memory hierarchy, why implement a data cache and an instruction cache?

I replied that it is useful for reducing the number of conflict misses and capacity (insufficient space) misses. But can the data cache and the instruction cache be sized according to the amount of data and the number of instructions? I assumed that the amount of data is larger than the number of instructions (often we need two operands to execute one instruction), and that the data cache and the instruction cache are sized accordingly. Is that true, or completely wrong? And if it is wrong, why implement a data cache and an instruction cache to reduce misses?
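For illustration of the conflict misses mentioned above, here is a minimal C sketch (my own example, not from the question; the cache geometry and addresses are assumed values) of how a direct-mapped cache picks a set. Two different addresses that land in the same set evict each other, which is the kind of conflict a unified cache shared by code and data is more exposed to.

/* A minimal sketch (assumed sizes and addresses, not from the question):
 * how a direct-mapped cache picks a set, and how two different addresses
 * can conflict in a unified cache shared by code and data.               */
#include <stdio.h>

#define BLOCK_SIZE 64   /* bytes per cache line (assumed)                 */
#define NUM_SETS   256  /* e.g. a 16 KiB direct-mapped cache: 256 * 64 B  */

static unsigned set_index(unsigned long addr) {
    return (unsigned)((addr / BLOCK_SIZE) % NUM_SETS);
}

int main(void) {
    unsigned long code_addr = 0x00400000UL;  /* hypothetical instruction address */
    unsigned long data_addr = 0x00410000UL;  /* hypothetical data address        */

    /* Both addresses map to set 0 here, so in a unified direct-mapped
     * cache they would keep evicting each other: a conflict miss.        */
    printf("code -> set %u, data -> set %u\n",
           set_index(code_addr), set_index(data_addr));
    return 0;
}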

Recommended answer

The idea of a cache is to deliver cached data in 1 cycle to keep the CPU running at maximum speed.
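As a rough way to see why both the 1-cycle hit time and the miss count matter, here is a small C sketch (my own illustration, with assumed numbers) of the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty.

/* A minimal sketch with assumed numbers: the standard average memory
 * access time (AMAT) formula, AMAT = hit_time + miss_rate * miss_penalty. */
#include <stdio.h>

int main(void) {
    double hit_time     = 1.0;    /* cycles for a cache hit (the "1 cycle") */
    double miss_rate    = 0.05;   /* assumed 5% of accesses miss            */
    double miss_penalty = 100.0;  /* assumed cycles to fetch from memory    */

    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.1f cycles\n", amat);   /* 1 + 0.05 * 100 = 6.0 */
    return 0;
}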

Today all CPUs are pipelined. This means they have independent stages that, for example, fetch an instruction, decode it, fetch the operands, execute the instruction, and write back the result. Whenever possible, all of these pipeline stages work at the same time on different instructions.

For maximum speed, an instruction fetch has to happen in the same cycle as the operand fetch of an earlier, already decoded instruction. Both can only be done simultaneously in one cycle (in the best case) if there is a separate instruction cache and data cache.
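To make the "both in the same cycle" point concrete, here is a back-of-the-envelope C sketch (my own assumed numbers, not from the answer): with one shared, single-ported cache, an instruction fetch and a data access cannot use the cache in the same cycle, so every load or store steals a fetch slot; with split caches they overlap.

/* A back-of-the-envelope sketch (assumed numbers): with one shared,
 * single-ported cache, an instruction fetch and a data access cannot
 * happen in the same cycle, so every load/store steals a fetch slot.   */
#include <stdio.h>

int main(void) {
    long   n_instructions = 1000000;
    double mem_fraction   = 0.35;   /* assumed share of loads/stores       */
    long   pipeline_fill  = 4;      /* cycles to fill a 5-stage pipeline   */

    /* Split I-cache / D-cache: fetch and data access overlap,
     * so in the ideal case one instruction completes per cycle.          */
    long cycles_split = n_instructions + pipeline_fill;

    /* Unified single-ported cache: each load/store blocks one fetch.     */
    long cycles_unified = n_instructions + pipeline_fill
                        + (long)(mem_fraction * n_instructions);

    printf("split caches : %ld cycles\n", cycles_split);
    printf("unified cache: %ld cycles\n", cycles_unified);
    return 0;
}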
