How the MMU (Memory Management Unit) in a processor protects memory segments


Question

While going through one embedded processor architecture, I came across the MMU block, and the documentation mainly mentions its memory protection functionality.

What I would like to know is:

How does an MMU provide this protection, and why is it needed? What is meant by memory protection?

What are the other uses of an MMU besides protection (such as virtual addressing)?

Please consider an embedded system without an OS.

-- Kanu

Answer

For processors that use memory (most of them) there is a memory interface of some sort; some have names (like AMBA, AXI, Wishbone), some do not. From the processor's perspective the interface carries an address, data, and a request to either read or write what is at that address. In the good old days you would have a single bus, and your flash, RAM, and peripherals would sit on this bus, looking at certain (usually upper) bits of the address to determine whether they were being addressed; if so they would read from or drive the data bus, otherwise remain tristated. Today, depending on the chip, some of that memory decoding happens in or close to the core, and the public interface to the core or chip might be several buses: there may be a specific flash bus, a specific SRAM bus, a specific DRAM bus, and so on.

So the first problem you have is a flat linear address space: even if it is divided up into flash and RAM, the RAM portion is flat, address 0 to N-1 for N bytes. For a non-embedded operating system, life is much easier if programs can all assume they start at address 0, or 0x100, or 0x8000, instead of having to be compiled for whatever the next free memory region happens to be, and if the operating system does not have to completely move a program out of lower memory and replace it with another on every task switch. An old, easy way was Intel's segment:offset scheme. Programs always started at the same place because the code segment was adjusted before launching the program and the offset was used for execution (a very simplified view of this model); when task switching among programs you just change the code segment and restore the PC for the next program. One program could be at address 0x1100 and another at 0x8100, but both programs think they are at address 0x0100. Easy for all the developers.

MMUs provide the same functionality by taking the address on the processor bus and calling it a virtual address; the MMU normally sits close to the processor, between the processor's memory interface and the rest of the chip/world. So the MMU might see address 0x0100, look it up in a table, and go to physical address 0x0100; then when you task switch you change the table so the next fetch of 0x0100 goes to 0x1100. Each program thinks it is operating at address 0x0100, and linking, compiling, developing, loading, and executing code is less painful.

The next features are caching, memory protection, etc. The processor and its memory controller may decode some addresses before they reach the MMU, perhaps certain core registers and perhaps the MMU controls themselves. But other things like memory and peripherals are addressed on the other side of the MMU, and on the other side of the cache, which is often the next layer of the onion outside the MMU. When polling your serial port, for example, to see if another byte is available, you don't want the data access to be cached, such that the first read of the serial-port status register actually goes out on the physical bus and touches the serial port, while all subsequent reads return the stale version in the cache. You do want caching for RAM values, that is the purpose of the cache, but for volatile things like status registers it is very bad. So depending on your system you are likely unable to turn on the data cache until the MMU is enabled. The memory interface on an ARM, for example, has control bits that indicate what type of access it is: cacheable or non-cacheable, part of a burst, that sort of thing. So you can enable instruction caching independently of data caching, and with the MMU off these control signals pass straight through to the cache controller, which in turn is connected to the outside world (if it didn't handle the transaction itself). So your instruction fetches can be cached while everything else is not.

But to cache data RAM accesses while not caching the status registers of the serial port, what you need to do is set up the tables for the MMU. In your embedded environment you may choose to simply map the RAM one to one, meaning virtual address 0x1000 becomes physical address 0x1000, but you can now set the data-cache-enable bit for that chunk of memory. Then for your serial port you also map virtual to physical addresses, but you clear the data-cache-enable bit for that chunk of memory space. Now you can enable the data cache: memory reads are cached (because the control signals, as they pass through the MMU, are marked as cacheable), but for your register accesses the control signals indicate non-cacheable.

You certainly do not have to map virtual to physical one to one; that depends on embedded or not, operating system or not, etc. But this is where your protection comes in. It is easiest to see in an operating system: an application at the application layer should not be allowed to get at protected system memory, the kernel, etc., and should not be able to clobber a fellow application's memory space. So when an application is switched in, the MMU tables reflect what memory it is allowed to access and what memory it is not. Any address not permitted to the program is caught by the MMU, an exception/fault (interrupt) is generated, and the kernel/supervisor gets control and can deal with that program. You may remember the term "general protection fault" from the earlier Windows days, before marketing and other interest groups in the company decided the name should change; it came straight out of the Intel manual. That interrupt was fired when you had a fault that didn't fall into any of the other categories, like a multiple-choice question on a test: A Bob, B Ted, C Alice, D none of the above. The general protection fault was the none-of-the-above category, yet the most widely hit, because that is what you got when your program tried to access memory or I/O outside its allocated memory space.

Another benefit from MMUs is malloc. Before MMUs, the memory allocator had to use schemes to rearrange memory to keep large empty blocks available for the next big malloc, to minimize the "with 4 MB free, why did my 1 kbyte alloc fail?" problem. Now, like a disk, you chop memory space up into 4-kbyte (or some such size) chunks. For a malloc that is one chunk or less in size, take any free chunk in memory, use an MMU table entry to point at it, and give the caller the virtual address tied to that MMU entry. If you want 4096*10 bytes, the trick is not having to find that much linear memory but finding 10 linear MMU table entries: take any 10 chunks of memory (not necessarily adjacent) and put their physical addresses in the 10 MMU entries.

The bottom line on "how" it does it: the MMU usually sits between the processor and the cache, or the physical memory bus if there is no cache. The MMU logic looks at the address and uses it to index into a table. The bits in the table include the physical address plus some control signals (which include cacheable), plus some way of indicating whether this is a valid entry or a protected region. If the address is protected, the MMU fires an interrupt/event back to the core. If valid, it replaces the virtual address with the physical address on the other/outside of the MMU, and bits like the cacheable bit are used to tell whatever is on the other side of the MMU what type of transaction this is: instruction, data, cacheable, burst, etc. For an embedded, non-OS, single-tasking system you may only need a single MMU table. A quick way for an operating system to perform protection, for example, is to have a table per application, or a subset of the table (which is tree-like, similar to a directory structure), so that when you task switch you only have to change one thing, the start of the table or the start of one branch of the tree, to change the virtual-to-physical addresses and allocated memory (protection) for that branch of the tree.

