Mapping DMA buffers to userspace

Question

i am writing a device driver on linux-2.6.26. I want to have a dma buffer mapped into userspace for sending data from driver to userspace application. Please suggest some good tutorial on it.

Thanks

Answer

Here is what I have used, in brief...

get_user_pages to pin the user page(s) and give you an array of struct page * pointers.
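
As a rough illustration of that step, here is a minimal sketch of pinning the user buffer. The helper name my_pin_user_buffer is made up for this example, and the 8-argument call matches the 2.6.26-era get_user_pages signature the question targets (newer kernels take gup_flags instead of the separate write/force arguments, and use mmap_lock rather than mmap_sem):

    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <linux/pagemap.h>

    /*
     * Pin the user buffer starting at 'uaddr' (nr_pages pages long) and
     * fill 'pages' with struct page pointers.
     */
    static int my_pin_user_buffer(unsigned long uaddr, int nr_pages,
                                  struct page **pages)
    {
        int pinned;

        down_read(&current->mm->mmap_sem);
        pinned = get_user_pages(current, current->mm,
                                uaddr & PAGE_MASK,  /* must be page aligned */
                                nr_pages,
                                1,                  /* write: device fills the pages */
                                0,                  /* no force */
                                pages,
                                NULL);              /* don't need the VMAs */
        up_read(&current->mm->mmap_sem);

        return pinned;  /* may be < nr_pages, or a negative errno */
    }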

dma_map_page on each struct page * to get the DMA address (aka. "I/O address") for the page. This also creates an IOMMU mapping (if needed on your platform).
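
A hedged sketch of that mapping step follows. The helper my_map_one_page is hypothetical, and DMA_FROM_DEVICE is assumed because the device writes into the buffer (use DMA_TO_DEVICE for the other direction). Note that very old kernels' dma_mapping_error takes only the DMA address, without the dev argument:

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    /*
     * Map one pinned page for DMA.  'dev' is the struct device of your
     * PCI/platform device.
     */
    static int my_map_one_page(struct device *dev, struct page *page,
                               dma_addr_t *dma_out)
    {
        dma_addr_t dma;

        dma = dma_map_page(dev, page,
                           0,            /* offset within the page */
                           PAGE_SIZE,    /* map the whole page */
                           DMA_FROM_DEVICE);

        /* IOMMU slots / bounce buffers are finite, so the mapping can fail. */
        if (dma_mapping_error(dev, dma))
            return -ENOMEM;

        *dma_out = dma;
        return 0;
    }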

Now tell your device to perform the DMA into the memory using those DMA addresses. Obviously they can be non-contiguous; memory is only guaranteed to be contiguous in multiples of the page size.
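
The device-programming part is entirely hardware specific, so the descriptor layout below (struct my_desc, my_fill_descriptors) is purely hypothetical; it only illustrates that each pinned page contributes its own DMA address and a page-sized length, since the pages are contiguous in the user's virtual address space but generally not in DMA space:

    #include <linux/types.h>

    /* Hypothetical hardware descriptor; real devices define their own layout. */
    struct my_desc {
        u64 addr;   /* bus/DMA address of one chunk */
        u32 len;    /* length of the chunk */
        u32 flags;
    };

    /* One descriptor per page; the device must support scatter/gather. */
    static void my_fill_descriptors(struct my_desc *ring,
                                    dma_addr_t *dma_addrs, int nr_pages)
    {
        int i;

        for (i = 0; i < nr_pages; i++) {
            ring[i].addr  = dma_addrs[i];
            ring[i].len   = PAGE_SIZE;
            ring[i].flags = 0;   /* e.g. a "last descriptor" bit on the final entry */
        }
    }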

dma_sync_single_for_cpu to do any necessary cache flushes or bounce buffer blitting or whatever. This call guarantees that the CPU can actually see the result of the DMA, since on many systems, modifying physical RAM behind the CPU's back results in stale caches.
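
For example, in whatever code runs after the device signals completion (an interrupt handler or completion callback), you might sync each mapping like this; my_sync_for_cpu is a hypothetical helper, and the direction again assumes a device-to-memory transfer:

    #include <linux/dma-mapping.h>

    /*
     * Make the DMA'd data visible to the CPU before reading it.  On
     * cache-coherent platforms this is (nearly) a no-op; on others it
     * flushes/invalidates caches or copies from a bounce buffer.
     */
    static void my_sync_for_cpu(struct device *dev, dma_addr_t *dma_addrs,
                                int nr_pages)
    {
        int i;

        for (i = 0; i < nr_pages; i++)
            dma_sync_single_for_cpu(dev, dma_addrs[i], PAGE_SIZE,
                                    DMA_FROM_DEVICE);
    }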

dma_unmap_page to free the IOMMU mapping (if it was needed on your platform).

put_page to unpin the user page(s).

Note that you must check for errors all the way through here, because there are limited resources all over the place. get_user_pages returns a negative number for an outright error (-errno), but it can return a positive number to tell you how many pages it actually managed to pin (physical memory is not limitless). If this is less than you requested, you still must loop through all of the pages it did pin in order to call put_page on them. (Otherwise you are leaking kernel memory; very bad.)
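
A sketch of such a teardown/error path, assuming the hypothetical page and DMA-address arrays from the earlier sketches, might look like this ('pinned' is whatever get_user_pages returned, 'mapped' is how many pages were successfully mapped before any failure):

    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/dma-mapping.h>

    /*
     * Every page that was pinned must be put back, and every mapping that
     * was created must be unmapped, or kernel memory / IOMMU slots leak.
     */
    static void my_release_buffer(struct device *dev, struct page **pages,
                                  dma_addr_t *dma_addrs, int pinned, int mapped)
    {
        int i;

        for (i = 0; i < mapped; i++)
            dma_unmap_page(dev, dma_addrs[i], PAGE_SIZE, DMA_FROM_DEVICE);

        for (i = 0; i < pinned; i++) {
            /* If the device wrote into the page, mark it dirty first. */
            set_page_dirty_lock(pages[i]);
            put_page(pages[i]);
        }
    }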

dma_map_page can also return an error (-errno), because IOMMU mappings are another limited resource.

dma_unmap_page and put_page return void, as usual for Linux "freeing" functions. (Linux kernel resource management routines only return errors because something actually went wrong, not because you screwed up and passed a bad pointer or something. The basic assumption is that you are never screwing up because this is kernel code. Although get_user_pages does check to ensure the validity of the user addresses and will return an error if the user handed you a bad pointer.)

You can also consider using the _sg functions if you want a friendly interface to scatter/gather. Then you would call dma_map_sg instead of dma_map_page, dma_sync_sg_for_cpu instead of dma_sync_single_for_cpu, etc.
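
Here is a sketch of the same flow using a scatterlist; the helper name my_map_sg is made up, while sg_init_table, sg_set_page, dma_map_sg, dma_sync_sg_for_cpu, and dma_unmap_sg are the standard kernel scatterlist/DMA calls:

    #include <linux/scatterlist.h>
    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    /*
     * Build one sg entry per pinned page, map the whole list at once, and
     * sync/unmap it the same way.  dma_map_sg() may coalesce entries behind
     * an IOMMU, so program the device from its return value (the number of
     * mapped entries), but pass the original nr_pages back to the sync and
     * unmap calls.
     */
    static int my_map_sg(struct device *dev, struct page **pages, int nr_pages,
                         struct scatterlist *sgl)
    {
        int i, nents;

        sg_init_table(sgl, nr_pages);
        for (i = 0; i < nr_pages; i++)
            sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

        nents = dma_map_sg(dev, sgl, nr_pages, DMA_FROM_DEVICE);
        if (nents == 0)
            return -ENOMEM;

        /* ... program the device from the sg list, wait for completion ... */

        dma_sync_sg_for_cpu(dev, sgl, nr_pages, DMA_FROM_DEVICE);
        dma_unmap_sg(dev, sgl, nr_pages, DMA_FROM_DEVICE);

        return 0;
    }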

Also note that many of these functions may be more-or-less no-ops on your platform, so you can often get away with being sloppy. (In particular, dma_sync_... and dma_unmap_... do nothing on my x86_64 system.) But on those platforms, the calls themselves get compiled into nothing, so there is no excuse for being sloppy.
