How to map 1GB (or more) of physical memory


Question

I have a setup with 2GB of memory and I would like to map 1GB (or more) of physical memory into a user space virtual address. It is in theory possible since, with a 32-bit setup, 3GB of virtual address space is available to user-land apps.

I updated the kernel command line with the following parameters: mem=1G memmap=1G$1G to force the kernel to see 1GB of RAM and to reserve the last 1GB.
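
A practical aside, assuming the parameters are set through GRUB 2 (the boot loader is an assumption, it is not stated in the question): the kernel parameter documentation notes that some boot loaders, Grub2 among them, need an escape character before the $ in memmap, otherwise the $ and the number after it get eaten and the reservation silently fails. In grub.cfg syntax that looks roughly like:

    linux /vmlinuz ... mem=1G memmap=1G\$1G

If /etc/default/grub is edited instead, extra escaping may be required because that file passes through a shell before grub.cfg is generated.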

I have my custom driver that will handle the user space mmap() call and map the physical address 0x40000000 (1G) to user space address with the function remap_pfn_range(). But the function triggers a kernel BUG() in remap_pte_range(). The same call used to work with a 300MB remap instead of 1GB.
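
For reference, a minimal sketch of an mmap() handler of that shape; the function and constant names here are illustrative, not taken from your driver:

    #include <linux/fs.h>
    #include <linux/mm.h>

    /* Illustrative: start of the region reserved with memmap=1G$1G */
    #define RESERVED_PHYS_BASE 0x40000000UL

    static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
    {
            unsigned long size = vma->vm_end - vma->vm_start;

            /* Map the reserved physical range, identified by its page
             * frame number, into the calling process's address space. */
            return remap_pfn_range(vma, vma->vm_start,
                                   RESERVED_PHYS_BASE >> PAGE_SHIFT,
                                   size, vma->vm_page_prot);
    }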

I usually call ioremap() in my driver to map physical addresses into kernel virtual addresses. In this case I can't, because of the 1G/3G virtual address split (1G for the kernel, 3G for apps). So I was wondering if it is possible to map physical addresses into user space virtual addresses without mapping these physical addresses in the kernel?

Thanks in advance.

Recommended Answer

Why your remap_pfn_range call triggers a kernel BUG()

The call to the BUG_ON macro in remap_pfn_range, as per lxr.free-electrons.com/source/mm/memory.c#L2277:

2277         BUG_ON(addr >= end);

remap_pfn_range calls remap_pud_range which calls remap_pmd_range which calls remap_pte_range.

The subsequent BUG_ON / VM_BUG_ON calls are in remap_pmd_range here:

2191         VM_BUG_ON(pmd_trans_huge(*pmd));

and in remap_pte_range here:

2171         BUG_ON(!pte_none(*pte));

The BUG_ON macro is defined here:

#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)

where the BUG macro is defined above it to print a message and panic.

The unlikely macro is defined here:

#define unlikely(x) __builtin_expect(!!(x), 0)

So when the target user address addr is greater than or equal to end, which is defined as end = addr + PAGE_ALIGN(size);, the BUG_ON condition is true and BUG is called.
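
As a quick illustration with made-up numbers:

    addr = 0x40000000;                    /* requested user-space start          */
    end  = addr + PAGE_ALIGN(0x40000000); /* 1GB: end = 0x80000000, check passes */

The check only fires when end does not lie above addr, for example when the size is 0 after page alignment, or when addr + PAGE_ALIGN(size) wraps past the top of a 32-bit address space.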

Or when pmd_trans_huge, defined here:

153 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
154 static inline int pmd_trans_splitting(pmd_t pmd)
155 {
156         return pmd_val(pmd) & _PAGE_SPLITTING;
157 }
158 
159 static inline int pmd_trans_huge(pmd_t pmd)
160 {
161         return pmd_val(pmd) & _PAGE_PSE;
162 }
163 
164 static inline int has_transparent_hugepage(void)
165 {
166         return cpu_has_pse;
167 }

returns 0, which occurs when CONFIG_TRANSPARENT_HUGEPAGE isn't configured in the kernel, or when the pmd (Page Middle Directory) value ANDed with _PAGE_PSE is 0.

Or when pte_none returns 1 if the corresponding entry does not exist and 0 if it exists.

Therefore !pte_none returns 0 when the corresponding page table entry does not exist, and 1 otherwise, and that is the condition passed into BUG_ON.

If the page table entry already exists then the call to BUG macro occurs.
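
For context, the loop in remap_pte_range in mm/memory.c of that kernel generation looks roughly like this (trimmed):

    pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
    if (!pte)
            return -ENOMEM;
    do {
            BUG_ON(!pte_none(*pte)); /* fires if a PTE is already present */
            set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
            pfn++;
    } while (pte++, addr += PAGE_SIZE, addr != end);
    pte_unmap_unlock(pte - 1, ptl);

The BUG_ON is evaluated once per page, so a single already-populated entry anywhere in a 1GB range is enough to panic the kernel.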

What happens if you specify an amount of memory lower than 1GB but greater than 300MB, say 500MB or 800MB?

So either your starting address is greater than your ending address, or CONFIG_TRANSPARENT_HUGEPAGE isn't configured in your kernel, or you are referring to page middle directory entries that don't exist or to page table entries that already exist.

Clarifying from the comments: your call to remap_pfn_range references page table entry pointers (*pte) that already point to a page table entry (pte).

This means that set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot))); would fail, as the pte pointer already points to a page table entry and hence can't be set to pte_mkspecial(pfn_pte(pfn, prot)).

Bypassing the 1G/3G virtual address split

See the following article on high memory in the Linux kernel.

See the following mailing list post, which discusses some additional information about HIGHMEM with a minimum of 1GB of RAM.

Information on mapping kernel and non-kernel virtual address space to user land

One way to map kernel virtual addresses, and non-kernel virtual addresses returned by vmalloc(), to userspace is using remap_pfn_range. See Linux Memory Mapping for additional information.
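
A minimal sketch of that approach, assuming a buffer previously allocated with vmalloc_user() (the name vbuf is illustrative):

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    static void *vbuf; /* assumed: allocated earlier with vmalloc_user() */

    static int mydrv_mmap_vmalloc(struct file *filp, struct vm_area_struct *vma)
    {
            unsigned long uaddr = vma->vm_start;
            unsigned long size  = vma->vm_end - vma->vm_start;
            char *kaddr = vbuf;
            int ret;

            /* vmalloc memory is not physically contiguous, so it has to
             * be remapped one page at a time via vmalloc_to_pfn(). */
            while (size > 0) {
                    ret = remap_pfn_range(vma, uaddr, vmalloc_to_pfn(kaddr),
                                          PAGE_SIZE, vma->vm_page_prot);
                    if (ret)
                            return ret;
                    uaddr += PAGE_SIZE;
                    kaddr += PAGE_SIZE;
                    size  -= PAGE_SIZE;
            }
            return 0;
    }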

Another way, which replaced the usage of the nopage handler on older kernels, is the vm_insert_page function.
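
A sketch of that variant, again with illustrative names, assuming a page obtained earlier from alloc_page():

    static struct page *mypage; /* assumed: allocated with alloc_page(GFP_KERNEL) */

    static int mydrv_mmap_page(struct file *filp, struct vm_area_struct *vma)
    {
            /* Insert a single kernel page at the start of the user VMA;
             * vm_insert_page() handles the page reference counting that
             * remap_pfn_range() does not. */
            return vm_insert_page(vma, vma->vm_start, mypage);
    }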

Additional resources include:

  • Kernel Space - User Space Interfaces
  • DeviceDriverMmap Linux Memory Management Wiki
  • The evolution of driver page remapping
  • Faulting out populate(), nopfn(), and nopage()
  • Understanding the Linux Virtual Memory Manager
  • Mmap for Linux Drivers
  • Linux Device Drivers 3rd Edition Chapter 15.1. Memory Management in Linux
  • Linux Device Drivers 3rd Edition Chapter 15.2. The mmap Device Operation
  • Linux Device Drivers 3rd Edition Chapter 15.4. Direct Memory Access
  • Linux Device Drivers 3rd Edition Chapter 9.4. Using I/O Memory
