Why shouldn't I use ioremap on system memory for ARMv6+?

Problem Description

I need to reserve a large buffer of physically contiguous RAM from the kernel and be able to guarantee that the buffer will always use a specific, hard-coded physical address. This buffer should remain reserved for the kernel's entire lifetime. I have written a chardev driver as an interface for accessing this buffer in userspace. My platform is an embedded system with ARMv7 architecture running a 2.6 Linux kernel.

Chapter 15 of Linux Device Drivers, Third Edition has the following to say on the topic (page 443):

Reserving the top of RAM is accomplished by passing a mem= argument to the kernel at boot time. For example, if you have 256 MB, the argument mem=255M keeps the kernel from using the top megabyte. Your module could later use the following code to gain access to such memory: dmabuf = ioremap (0xFF00000 /* 255M */, 0x100000 /* 1M */);

I've done that plus a couple of other things:

  1. I'm using the memmap bootarg in addition to the mem one. The kernel boot parameters documentation suggests always using memmap whenever you use mem to avoid address collisions.
  2. I used request_mem_region before calling ioremap and, of course, I check that it succeeds before moving ahead. (A minimal sketch of this sequence is shown right after this list.)
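
A minimal sketch of that sequence, with hypothetical names (FOO_BUF_PHYS, FOO_BUF_SIZE, foo_init, and the "foo" region label) and addresses taken from the memory map shown below — not the actual driver code:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/io.h>

/* Hypothetical values; the layout matches the memory map below:
 * RAM starts at 0x80000000, mem=255M limits the kernel to the first
 * 255 MB, and the reserved top megabyte sits at 0x8ff00000. */
#define FOO_BUF_PHYS  0x8ff00000UL
#define FOO_BUF_SIZE  0x00100000UL      /* 1 MB */

static void __iomem *foo_buf;

static int __init foo_init(void)
{
    /* Claim the region so it shows up as "foo" in /proc/iomem. */
    if (!request_mem_region(FOO_BUF_PHYS, FOO_BUF_SIZE, "foo"))
        return -EBUSY;

    /* This ioremap() call is what later trips the ARMv6+ warning
     * if the pages are still kernel-managed System RAM. */
    foo_buf = ioremap(FOO_BUF_PHYS, FOO_BUF_SIZE);
    if (!foo_buf) {
        release_mem_region(FOO_BUF_PHYS, FOO_BUF_SIZE);
        return -ENOMEM;
    }
    return 0;
}

static void __exit foo_exit(void)
{
    iounmap(foo_buf);
    release_mem_region(FOO_BUF_PHYS, FOO_BUF_SIZE);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");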

This is what the system looks like after I've done all that:

# cat /proc/cmdline 
root=/dev/mtdblock2 console=ttyS0,115200 init=/sbin/preinit earlyprintk debug mem=255M memmap=1M$255M
# cat /proc/iomem 
08000000-0fffffff : PCIe Outbound Window, Port 0
  08000000-082fffff : PCI Bus 0001:01
    08000000-081fffff : 0001:01:00.0
    08200000-08207fff : 0001:01:00.0
18000300-18000307 : serial
18000400-18000407 : serial
1800c000-1800cfff : dmu_regs
18012000-18012fff : pcie0
18013000-18013fff : pcie1
18014000-18014fff : pcie2
19000000-19000fff : cru_regs
1e000000-1fffffff : norflash
40000000-47ffffff : PCIe Outbound Window, Port 1
  40000000-403fffff : PCI Bus 0002:01
    40000000-403fffff : 0002:01:00.0
  40400000-409fffff : PCI Bus 0002:01
    40400000-407fffff : 0002:01:00.0
    40800000-40807fff : 0002:01:00.0
80000000-8fefffff : System RAM
  80052000-8045dfff : Kernel text
  80478000-80500143 : Kernel data
8ff00000-8fffffff : foo

Everything so far looks good, and my driver is working perfectly. I'm able to read and write directly to the specific physical address I've chosen.

However, during bootup, a big scary warning was triggered:

BUG: Your driver calls ioremap() on system memory.  This leads
to architecturally unpredictable behaviour on ARMv6+, and ioremap()
will fail in the next kernel release.  Please fix your driver.
------------[ cut here ]------------
WARNING: at arch/arm/mm/ioremap.c:211 __arm_ioremap_pfn_caller+0x8c/0x144()
Modules linked in:
[] (unwind_backtrace+0x0/0xf8) from [] (warn_slowpath_common+0x4c/0x64)
[] (warn_slowpath_common+0x4c/0x64) from [] (warn_slowpath_null+0x1c/0x24)
[] (warn_slowpath_null+0x1c/0x24) from [] (__arm_ioremap_pfn_caller+0x8c/0x144)
[] (__arm_ioremap_pfn_caller+0x8c/0x144) from [] (__arm_ioremap_caller+0x50/0x58)
[] (__arm_ioremap_caller+0x50/0x58) from [] (foo_init+0x204/0x2b0)
[] (foo_init+0x204/0x2b0) from [] (do_one_initcall+0x30/0x19c)
[] (do_one_initcall+0x30/0x19c) from [] (kernel_init+0x154/0x218)
[] (kernel_init+0x154/0x218) from [] (kernel_thread_exit+0x0/0x8)
---[ end trace 1a4cab5dbc05c3e7 ]---

It is triggered from the following source, arch/arm/mm/ioremap.c:

/*
 * Don't allow RAM to be mapped - this causes problems with ARMv6+
 */
if (pfn_valid(pfn)) {
    printk(KERN_WARNING "BUG: Your driver calls ioremap() on system memory.  This leads\n"
           KERN_WARNING "to architecturally unpredictable behaviour on ARMv6+, and ioremap()\n"
           KERN_WARNING "will fail in the next kernel release.  Please fix your driver.\n");
    WARN_ON(1);
}

What problems, exactly, could this cause? Can they be mitigated? What are my alternatives?

Recommended Answer

So I've done exactly that, and it's working.

Provide the kernel command line (e.g. /proc/cmdline) and the resulting memory map (i.e. /proc/iomem) to verify this.

What problems, exactly, could this cause?

The problem with using ioremap() on system memory is that you end up assigning conflicting attributes to the memory, which causes "unpredictable" behavior.
See the article "ARM's multiply-mapped memory mess", which provides the history behind the warning you are triggering.

The ARM kernel maps RAM as normal memory with writeback caching; it's also marked non-shared on uniprocessor systems. The ioremap() system call, used to map I/O memory for CPU use, is different: that memory is mapped as device memory, uncached, and, maybe, shared. These different mappings give the expected behavior for both types of memory. Where things get tricky is when somebody calls ioremap() to create a new mapping for system RAM.

The problem with these multiple mappings is that they will have differing attributes. As of version 6 of the ARM architecture, the specified behavior in that situation is "unpredictable."

Note that "system memory" is the RAM that is managed by the kernel.
The fact that you trigger the warning indicates that your code is generating multiple mappings for a region of memory.

Can they be mitigated?

You have to ensure that the RAM you want to ioremap() is not "system memory", i.e. managed by the kernel.
See also this answer.
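
If you want to verify this from the driver itself before remapping, one option is to walk the page frames and ask pfn_valid(), the same test that fires the warning. This is a sketch; the helper name foo_range_is_kernel_ram is hypothetical:

#include <linux/mm.h>          /* pfn_valid(), PAGE_SHIFT */
#include <linux/types.h>       /* phys_addr_t, bool */

/* Returns true if any page in [base, base + size) is RAM that the
 * kernel manages, i.e. remapping it would trigger the ARMv6+ warning. */
static bool foo_range_is_kernel_ram(phys_addr_t base, size_t size)
{
    unsigned long pfn  = base >> PAGE_SHIFT;
    unsigned long last = (base + size - 1) >> PAGE_SHIFT;

    for (; pfn <= last; pfn++)
        if (pfn_valid(pfn))
            return true;
    return false;
}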

Addendum

This warning that concerns you is the result of pfn_valid(pfn) returning TRUE rather than FALSE.
Based on the Linux cross-reference link that you provided for version 2.6.37, pfn_valid() is simply returning the result of

memblock_is_memory(pfn << PAGE_SHIFT);  

which in turn returns

memblock_search(&memblock.memory, addr) != -1;  

I suggest that the kernel code be hacked so that the conflict is revealed.
Before the call to ioremap(), assign TRUE to the global variable memblock_debug.
The following patch should display the salient information about the memory conflict.
(The memblock list is ordered by base-address, so memblock_search() performs a binary search on this list, hence the use of mid as the index.)

 static int __init_memblock memblock_search(struct memblock_type *type, phys_addr_t addr)
 {
         unsigned int left = 0, right = type->cnt;

         do {
                 unsigned int mid = (right + left) / 2;

                 if (addr < type->regions[mid].base)
                         right = mid;
                 else if (addr >= (type->regions[mid].base +
                                   type->regions[mid].size))
                         left = mid + 1;
-                else
+                else {
+                        if (memblock_debug)
+                                pr_info("MATCH for 0x%x: m=0x%x b=0x%x s=0x%x\n", 
+                                                addr, mid, 
+                                                type->regions[mid].base, 
+                                                type->regions[mid].size);
                         return mid;
+                }
         } while (left < right);
         return -1;
 }

If you want to see all the memory blocks, then call memblock_dump_all() with the variable memblock_debug set to TRUE.

[Interesting that this is essentially a programming question, yet we haven't seen any of your code.]

Addendum 2

Since you're probably using ATAGs (instead of Device Tree), and you want to dedicate a memory region, fix up the ATAG_MEM to reflect this smaller size of physical memory.
Assuming you have made zero changes to your boot code, the ATAG_MEM is still specifying the full RAM, so perhaps this could be the source of the system memory conflict that causes the warning.
See this answer about ATAGs and this related answer.
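
For illustration only, a sketch of how the memory tag might be built in boot code that uses ATAGs. The function name is hypothetical and the addresses are assumptions taken from the memory map above, so adjust to your actual boot code:

#include <asm/setup.h>      /* struct tag, ATAG_MEM, tag_size(), tag_next() */

/* Describe only the 255 MB the kernel is allowed to manage; the top
 * megabyte at 0x8ff00000 is left out so it never becomes System RAM. */
static struct tag *foo_setup_mem_tag(struct tag *params)
{
    params->hdr.tag     = ATAG_MEM;
    params->hdr.size    = tag_size(tag_mem32);
    params->u.mem.start = 0x80000000;   /* base of RAM */
    params->u.mem.size  = 0x0ff00000;   /* 255 MB, not the full 256 MB */
    return tag_next(params);
}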
