ioctl vs netlink vs memmap to communicate between kernel space and user space


Problem Description


We have statistics from some custom hardware that must be displayed whenever the user asks for them with a command in Linux user space. The implementation currently uses the PROC interface. As we added more statistics we ran into a problem: because the PROC interface is restricted to one page, the statistics command had to be executed twice to retrieve all the data.
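For context, the one-page restriction is a property of the legacy `read_proc`-style handlers; the `seq_file` interface (available well before kernel 3.4) removes it by streaming records through an iterator, so output size is unbounded. A minimal kernel-module sketch, where the `stat_name`/`stat_value` table is a hypothetical stand-in for the real hardware counters:

```c
/* Sketch only: stat_name/stat_value are hypothetical stand-ins for the
 * driver's real counters. seq_file grows its buffer as needed, so the
 * output is not capped at one page. */
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static const char *stat_name[] = { "rx_frames", "tx_frames" }; /* hypothetical */
static u64 stat_value[2];

static int stats_show(struct seq_file *m, void *v)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(stat_value); i++)
		seq_printf(m, "%s: %llu\n", stat_name[i], stat_value[i]);
	return 0;
}

static int stats_open(struct inode *inode, struct file *file)
{
	return single_open(file, stats_show, NULL);
}

static const struct file_operations stats_fops = {
	.owner   = THIS_MODULE,
	.open    = stats_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

static int __init stats_init(void)
{
	proc_create("hw_stats", 0444, NULL, &stats_fops);
	return 0;
}
module_init(stats_init);
MODULE_LICENSE("GPL");
```

This keeps the existing PROC-based tooling working while lifting the size limit, which may already be enough if the two-read workaround is the only pain point.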


As mentioned above, the data transfer between the kernel and user space is not critical, but the user may take decisions based on the data. Our requirements for this interface are that it should be capable of transferring amounts of data possibly greater than 8192 bytes, that the command should use minimal kernel resources (locks etc.), and that it should be quick.


Using ioctl could solve the issue, but since the command is not exactly controlling the device, just collecting statistics, I am not sure it is the appropriate mechanism by Linux conventions. We are currently on the 3.4 kernel; I am not sure whether Netlink is lossy in this version (in previous versions I ran into issues such as the socket dropping data when its queue becomes full). mmap is another option. Can anyone suggest which would be the best interface to use?

Recommended Answer


  • Kernel services can send information directly to user applications over Netlink, while you'd have to explicitly poll the kernel with ioctl calls, a relatively expensive operation.
  • Netlink communication is very much asynchronous, with each side receiving messages at some point after the other side sends them. ioctls are purely synchronous: "Hey kernel, WAKE UP! I need you to process my request NOW! CHOP CHOP!"
  • Netlink supports multicast communication between the kernel and multiple user-space processes, while ioctls are strictly one-to-one.


Netlink messages can be lost for various reasons (e.g. out of memory), while ioctls are generally more reliable due to their immediate-processing nature.


So if user space (the application) is pulling statistics from the kernel on demand, IOCTL is the more reliable and easier mechanism; whereas if you generate statistics in kernel space and want the kernel to push that data to user space (the application), you have to use Netlink sockets.
