What is the difference between dma_mmap_coherent and remap_pfn_range?


Question


Currently, I am using an example driver to learn from, and I have based my own custom driver around it. The mmap code is very nearly identical, save for two things: I allow the user to request their own buffer size and base my memory allocation on that, and I automatically create the char device within /dev.


To explain the context for my use case, I'd like to narrow down an issue that I'm having. dma_mmap_coherent testably works when using kmalloc'd memory, but when I have a reserved physical address region that I want to use remap_pfn_range with, it quietly appears to work and dmesg doesn't report any errors; yet when I go to read, no matter what I've written there, it always returns 0xff bytes. This is true whether I use iowrite/ioread in kernel land after ioremap'ing the memory, or try to write from userland using a small mmap'ing userland test.


I've done as much research on the topic as I can, I think. All I can find for documentation of remap_pfn_range is the kernel.org page, and some gmane kernel mailing list archives on remap_pfn_range replacing remap_page_range. As for dma_mmap_coherent, I was able to find a little bit more, including a presentation from the Linux archives.


Ultimately there has to be a difference; there seem to be so many different ways to map kernel memory into userland. The particular question I have is: what is the difference between dma_mmap_coherent and remap_pfn_range?


Edit: it might be nice to provide a general overview of the ways to map kernel memory into userland, covering how the different APIs would be used in a kernel driver's mmap callback.

Answer


dma_mmap_coherent() is defined in dma-mapping.h as a wrapper around dma_mmap_attrs(). dma_mmap_attrs() checks whether a set of dma_map_ops is associated with the device (struct device *dev) you are operating on; if not, it calls dma_common_mmap(), which eventually leads to a call to remap_pfn_range() after setting the page protection as non-cacheable (see dma_common_mmap() in dma-mapping.c).
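To make the relationship concrete, here is a sketch of an mmap callback that lets dma_mmap_coherent() do the work that a manual remap_pfn_range() path does by hand. The names (my_mmap_coherent, info, buf) are hypothetical, and it assumes the buffer was allocated earlier with dma_alloc_coherent():

```c
/* Hypothetical sketch: an mmap callback built on dma_mmap_coherent().
 * Assumes info->buf->kvaddr / info->buf->phyaddr came from an earlier
 * dma_alloc_coherent() call, and that open() stashed info in private_data. */
static int my_mmap_coherent(struct file *fptr, struct vm_area_struct *vma)
{
        struct my_struct *info = fptr->private_data; /* assumed layout */
        size_t size = vma->vm_end - vma->vm_start;

        /* dma_mmap_coherent() selects the right page protection and
         * mapping strategy for this device, then remaps for us. */
        return dma_mmap_coherent(info->dev, vma, info->buf->kvaddr,
                                 info->buf->phyaddr, size);
}
```

The practical difference is that dma_mmap_coherent() defers to the device's dma_map_ops, while calling remap_pfn_range() yourself hardcodes one particular mapping strategy.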


As for a general overview of how mmap'ing kernel memory to user space works, the following is my quick and simple way of mmap'ing DMA buffers from user space:


  1. Allocate a Buffer via an IOCTL and designate a buffer ID for each buffer with some flags:

/* A copy_from_user() call needs to be done before this in the IOCTL */
static int my_ioctl_alloc(struct my_struct *info, struct alloc_info *alloc)
{
        ...
        /* Note: dma_alloc_coherent() takes a *pointer* to the DMA handle */
        info->buf->kvaddr = dma_alloc_coherent(dev, alloc->size,
                                               &info->buf->phyaddr, GFP_KERNEL);
        info->buf->buf_id = alloc->buf_id;
        ...
}


  2. Define the mmap file operation:

    static const struct file_operations my_fops = {
            .owner = THIS_MODULE,
            .open = my_open,
            .release = my_close,    /* file_operations has .release, not .close */
            .mmap = my_mmap,
            .unlocked_ioctl = my_ioctl,
    };
    


    Do not forget to register the my_fops struct somewhere in your driver's probe function.
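As one illustration of that registration (the misc-device route and the name "my_dma_buf" are assumptions on my part; a cdev_init()/cdev_add() char device works just as well):

```c
#include <linux/miscdevice.h>

/* Hypothetical sketch: exposing my_fops via a misc device from probe(). */
static struct miscdevice my_miscdev = {
        .minor = MISC_DYNAMIC_MINOR,  /* let the kernel pick a minor number */
        .name  = "my_dma_buf",        /* shows up as /dev/my_dma_buf */
        .fops  = &my_fops,
};

static int my_probe(struct platform_device *pdev)
{
        /* ... device setup ... */
        return misc_register(&my_miscdev);
}
```

The misc framework also creates the /dev node for you via udev, which covers the "automatically create the char device within /dev" part of the question.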


    3. Implement the mmap file op:

     static int my_mmap(struct file *fptr, struct vm_area_struct *vma)
     {
             ...
             desc_id = vma->vm_pgoff;
             buf = find_buf_by_id(info, desc_id);
             /* Map non-cached, as DMA-coherent memory should be */
             vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
             ret = remap_pfn_range(vma, vma->vm_start,
                                   buf->phyaddr >> PAGE_SHIFT,
                                   vma->vm_end - vma->vm_start,
                                   vma->vm_page_prot);
             if (ret) {
                     /* Error Handle */
                     return ret;
             }
             return 0;
     }
    


    With this your kernel driver should have the minimum to allocate and mmap buffers. Freeing the buffers is an exercise for the bonus points!


    In the application, you would open() the file to get a valid file descriptor fd, then call the allocate IOCTL with the requested size and buffer ID (performing the copy-to-kernel). In the mmap call, you pass the buffer ID via the offset parameter:

          mmap(NULL, buf_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buffer_id << PAGE_SHIFT);
    


    PAGE_SHIFT is an architecture-dependent compile-time macro fixed in the kernel. Hope this helps.


    This is not checkpatch.pl compliant code, nor is this the best practice, but it's one way I know how to do this. Comments/improvements/suggestions welcome!


    See Linux Device Drivers - Chapter 15: Memory Mapping and DMA for the textbook examples and good background information for the interested reader.
