How MMU(Memory management Unit) unit in a processor protects the memory segments


Question

While going through one embedded processor architecture, I came across the MMU block, which is described mainly in terms of memory protection functionality.

May I know:

How does an MMU perform this protection, and why is it needed? What is meant by memory protection?

What are the other uses of an MMU besides protection (such as virtual addressing)?

Please consider an embedded system without an OS.

-- Kanu

Answer

For processors that use memory (most of them) there is a memory interface of some sort; some have names (like AMBA, AXI, Wishbone), some do not. From the processor's perspective it is an address and data: please either read or write what is at that address. In the good old days you had a single bus, and your flash, RAM, and peripherals would sit on this bus, looking at certain (usually upper) bits of the address to determine whether they were being addressed; if so, they would read from or drive the data bus, otherwise they remained tri-stated. Today, depending on the chip, some of that memory decoding happens in or close to the core, and your public interface to the core or chip might be several buses: perhaps a specific flash bus, a specific SRAM bus, a specific DRAM bus, etc.

So the first problem you have is a flat, linear address space: even if it is divided into flash and RAM, the RAM portion is flat, address 0 to N-1 for N bytes. For a non-embedded operating system, life would be much easier if programs could all assume they start at address 0, or address 0x100, or address 0x8000, instead of having to be compiled somehow for whatever the next free memory space happens to be, and if the operating system did not have to completely move a program out of lower memory and replace it with another on every task switch. An old, easy way was Intel's segment:offset scheme. Programs always started at the same place because the code segment was adjusted before launching the program and the offset was used for execution (a very simplified view of this model); when task switching among programs, you just change the code segment and restore the PC for the next program. One program could be at address 0x1100 and another at 0x8100, but both programs think they are at address 0x0100. Easy for all the developers. An MMU provides the same functionality by taking the address on the processor bus and calling it a virtual address; the MMU normally sits close to the processor, between the processor's memory interface and the rest of the chip/world. So you could have the MMU see address 0x0100, look it up in a table, and go to physical address 0x0100; then, when you task switch, you change the table so that the next fetch of 0x0100 goes to 0x1100. Each program thinks it is operating at address 0x0100, and linking, compiling, developing, loading, and executing code is less painful.

The next feature is caching, memory protection, etc. The processor and its memory controller may decode some addresses before they reach the MMU, perhaps certain core registers, and perhaps the MMU controls themselves. But other things, like memory and peripherals, are addressed on the other side of the MMU, on the other side of the cache, which is often the next layer of the onion outside the MMU. When polling your serial port, for example, to see if another byte is available, you do not want the data access to be cached such that the first read of the serial port status register actually goes out on the physical bus and touches the serial port, and then all subsequent reads return the stale version in the cache. You do want that for RAM values (that is the purpose of the cache), but for volatile things like status registers it is very bad. So, depending on your system, you are likely not able to turn on the data cache until the MMU is enabled. The memory interface on an ARM, for example, has control bits that indicate what type of access it is: non-cacheable or cacheable, part of a burst, that sort of thing. So you can enable instruction caching independent of data caching, and with the MMU off these control signals pass straight through to the cache controller, which in turn is connected to the outside world (if it did not handle the transaction itself). So your instruction fetches can be cached while everything else is not cached. But to cache data RAM accesses while not caching the status registers of the serial port, what you need to do is set up the tables for the MMU. In your embedded environment you may choose to simply map the RAM one to one, meaning virtual address 0x1000 becomes physical address 0x1000, but you can now set the data-cache-enable bit for that chunk of memory. Then, for your serial port, you map virtual to physical as well, but clear the data-cache-enable bit for that chunk of memory space. Now you can enable the data cache: memory reads are cached (because the control signals, as they pass through the MMU, are marked as such), while for your register accesses the control signals indicate non-cacheable.

You certainly do not have to map virtual to physical one to one; it depends on embedded or not, operating system or not, etc. But this is where your protection comes in, and it is easiest to see in an operating system. An application at the application layer should not be allowed to get at protected system memory, the kernel, etc., and should not be able to clobber a fellow application's memory space. So, when the application is switched in, the MMU tables reflect what memory it is allowed to access and what memory it is not. Any address not permitted for that program is caught by the MMU, an exception/fault (interrupt) is generated, and the kernel/supervisor gets control and can deal with the program. You may remember the term "general protection fault" from the early Windows days, before marketing and other interest groups in the company decided the name should change; it came straight out of the Intel manual. That interrupt fired when you had a fault that did not fall into any other category, like a multiple-choice question on a test: A) Bob, B) Ted, C) Alice, D) none of the above. The general protection fault was the none-of-the-above category, yet the most widely hit, because it is what you got when your program tried to access memory or I/O outside its allocated memory space.

Another benefit of MMUs is malloc. Before MMUs, the memory allocator had to use schemes that rearranged memory to keep large empty blocks available for the next big malloc, to minimize the "with 4 MB free, why did my 1 KB alloc fail?" problem. Now, like a disk, you chop the memory space up into 4 KB (or some such size) chunks. For a malloc of one chunk or less, take any free chunk in memory, use an MMU table entry to point at it, and give the caller the virtual address tied to that MMU entry. If you want 4096*10 bytes, the trick is not having to find that much linear memory but finding 10 linear MMU table entries: take any 10 chunks of memory (not necessarily adjacent) and put their physical addresses in the 10 MMU entries.

The bottom line on "how" it does it: the MMU usually sits between the processor and the cache, or, if there is no cache, the physical memory bus. The MMU logic looks at the address and uses it to look into a table. The bits in the table include the physical address plus some control signals (including cacheable), plus some way of indicating whether this is a valid entry or a protected region. If the address is protected, the MMU fires an interrupt/event back to the core. If it is valid, the MMU modifies the virtual address into the physical address on its other/outside side, and bits like the cacheable bit tell whatever is on the other side of the MMU what type of transaction this is: instruction, data, cacheable, burst, etc. For an embedded, non-OS, single-tasking system you may only need a single MMU table. In an operating system, a quick way to perform protection, for example, would be to have a table per application, or a subset of the table (tree-like, similar to a directory structure), so that when you task switch you only have to change one thing, the start of the table or the start of one branch of the tree, to change the virtual-to-physical mapping and the allocated memory (protection) for that branch of the tree.

