Mapping DMA buffers to userspace


Problem description

I am writing a device driver on Linux 2.6.26. I want to have a DMA buffer mapped into userspace for sending data from the driver to a userspace application. Please suggest some good tutorials on this.

Thanks

Answer

Here is what I have used, in brief...

get_user_pages to pin the user page(s) and give you an array of struct page * pointers.
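A minimal sketch of the pinning step, assuming a 2.6-era kernel (the get_user_pages() signature has changed several times in later kernels) and hypothetical names uaddr and count coming from your driver's ioctl/write path:

```c
/* Pin the user buffer.  uaddr/count are hypothetical inputs from
 * the driver's ioctl or write path. */
int nr_pages = (count + PAGE_SIZE - 1) >> PAGE_SHIFT;
struct page **pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
int pinned;

if (!pages)
        return -ENOMEM;

down_read(&current->mm->mmap_sem);
pinned = get_user_pages(current, current->mm,
                        uaddr & PAGE_MASK, nr_pages,
                        1 /* write access */, 0 /* no force */,
                        pages, NULL);
up_read(&current->mm->mmap_sem);

if (pinned < 0)
        return pinned;   /* outright error (-errno) */
/* pinned may be less than nr_pages; see the error-handling notes. */
```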

dma_map_page on each struct page * to get the DMA address (aka "I/O address") for the page. This also creates an IOMMU mapping (if needed on your platform).
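The mapping step might look like this; dev, dir, and the dma_addrs[] array are assumptions (a struct device *, a DMA direction, and an array sized like pages[]), and note that dma_mapping_error() took only the address on older kernels:

```c
/* Map each pinned page for DMA.  dev, dir and dma_addrs[] are
 * hypothetical driver state. */
int i;

for (i = 0; i < pinned; i++) {
        dma_addrs[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE, dir);
        if (dma_mapping_error(dev, dma_addrs[i]))
                goto unmap_and_unpin;   /* undo mappings 0..i-1 */
}
```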

Now tell your device to perform the DMA into the memory using those DMA addresses. Obviously they can be non-contiguous; memory is only guaranteed to be contiguous in multiples of the page size.

dma_sync_single_for_cpu to do any necessary cache flushes or bounce buffer blitting or whatever. This call guarantees that the CPU can actually see the result of the DMA, since on many systems, modifying physical RAM behind the CPU's back results in stale caches.
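A sketch of the sync step, reusing the hypothetical dma_addrs[] from the mapping step; typically this runs after the device signals completion, e.g. in the interrupt handler:

```c
/* After the device reports completion, make the DMA'd data
 * visible to the CPU before reading it. */
int i;

for (i = 0; i < pinned; i++)
        dma_sync_single_for_cpu(dev, dma_addrs[i], PAGE_SIZE,
                                DMA_FROM_DEVICE);
```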

dma_unmap_page to free the IOMMU mapping (if it was needed on your platform).

put_page to un-pin the user page(s).
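The teardown of the two steps above can be sketched together, again with the hypothetical pages[]/dma_addrs[] arrays; marking device-written pages dirty before release is a common extra precaution, not something the original answer mentions:

```c
/* Tear down in reverse order: unmap, optionally dirty, unpin. */
int i;

for (i = 0; i < pinned; i++) {
        dma_unmap_page(dev, dma_addrs[i], PAGE_SIZE, dir);
        if (dir == DMA_FROM_DEVICE && !PageReserved(pages[i]))
                set_page_dirty_lock(pages[i]);  /* device wrote here */
        put_page(pages[i]);
}
kfree(pages);
```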

Note that you must check for errors all the way through here, because there are limited resources all over the place. get_user_pages returns a negative number for an outright error (-errno), but it can also return a positive number telling you how many pages it actually managed to pin (physical memory is not limitless). If this is fewer than you requested, you still must loop through all of the pages it did pin in order to call put_page on them. (Otherwise you are leaking kernel memory; very bad.)
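A sketch of that partial-pin error path, under the same assumptions as the pinning example (the choice of -EFAULT here is illustrative, not prescribed by the original answer):

```c
/* get_user_pages() pinned fewer pages than requested: release the
 * ones it did pin before failing, or we leak kernel memory. */
if (pinned >= 0 && pinned < nr_pages) {
        while (--pinned >= 0)
                put_page(pages[pinned]);
        kfree(pages);
        return -EFAULT;   /* or retry with a smaller request */
}
```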

dma_map_page can also return an error (-errno), because IOMMU mappings are another limited resource.

dma_unmap_page and put_page return void, as is usual for Linux "freeing" functions. (Linux kernel resource management routines only return errors because something actually went wrong, not because you screwed up and passed a bad pointer or something. The basic assumption is that you never screw up, because this is kernel code. get_user_pages does, however, check the validity of the user addresses and will return an error if the user handed you a bad pointer.)

You can also consider using the _sg functions if you want a friendlier interface to scatter/gather. Then you would call dma_map_sg instead of dma_map_page, dma_sync_sg_for_cpu instead of dma_sync_single_for_cpu, and so on.
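The same flow with the scatter/gather API might look like the sketch below, reusing the hypothetical pages[]/dev/dir names; on 2.6-era kernels you populate the list with sg_set_page() yourself (sg_alloc_table_from_pages() only appeared later):

```c
/* Build a scatterlist over the pinned pages and map it in one call. */
struct scatterlist *sgl = kcalloc(pinned, sizeof(*sgl), GFP_KERNEL);
int i, nents;

if (!sgl)
        return -ENOMEM;

sg_init_table(sgl, pinned);
for (i = 0; i < pinned; i++)
        sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

nents = dma_map_sg(dev, sgl, pinned, dir);   /* 0 means failure */
/* ... program the device with the nents mapped segments ... */
dma_sync_sg_for_cpu(dev, sgl, nents, dir);
dma_unmap_sg(dev, sgl, pinned, dir);
kfree(sgl);
```

Note that dma_map_sg may return fewer entries than it was given if the IOMMU coalesces adjacent pages, so the device should be programmed with nents, not with the original page count.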

Also note that many of these functions may be more or less no-ops on your platform, so you can often get away with being sloppy. (In particular, dma_sync_... and dma_unmap_... do nothing on my x86_64 system.) But on those platforms, the calls themselves get compiled into nothing, so there is no excuse for being sloppy.
