concurrent I/O - buffers corruption, block device driver


Problem Description

I am developing a layered block device driver, so I intercept WRITE requests and encrypt the data, and decrypt the data in the end_bio() routine (while processing a READ request). Everything works fine in a single stream, but I get buffer content corruption when I/O is performed from two or more processes simultaneously. I have no local storage for the buffers.
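For context, a minimal sketch of what such a hook might look like on a 4.15-era kernel. The names dudrv_make_request, dudrv_dev, dudrv_encrypt_segments and dudrv_hook_read_end_io are hypothetical placeholders, not the actual driver:

    /* Hypothetical make_request hook of a layered block driver (4.15-era API). */
    static blk_qc_t dudrv_make_request(struct request_queue *q, struct bio *bio)
    {
        struct dudrv_dev *dev = q->queuedata;   /* hypothetical per-device state */

        bio_set_dev(bio, dev->backing_bdev);    /* redirect to the backing device */

        if (bio_data_dir(bio) == WRITE)
            dudrv_encrypt_segments(bio);        /* hypothetical in-place transform */
        else
            dudrv_hook_read_end_io(bio);        /* chain bi_end_io so READ data can
                                                   be decrypted on completion */

        return generic_make_request(bio);       /* hand the bio to the lower layer */
    }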

Do I need to merge BIOs in my driver?

Does the Linux I/O subsystem have any requirements related to a large number of concurrent I/O requests?

Are there any tips and tricks related to stack usage or compilation?

This is under kernel 4.15.

At the moment I use the following construct to iterate over the disk sectors:

    /*
     * A portion of the bio_copy_data() ...
     */
    for (vcnt = 0, src_iter = src->bi_iter; ; vcnt++)
        {
        if ( !src_iter.bi_size )
            {
            /* This bio is exhausted, move on to the next one in the chain */
            if ( !(src = src->bi_next) )
                break;

            src_iter = src->bi_iter;
            }

        src_bv = bio_iter_iovec(src, src_iter);

        src_p  = bv_page = kmap_atomic(src_bv.bv_page);
        src_p += src_bv.bv_offset;

        /* Number of 512-byte sectors covered by this segment */
        nlbn = src_bv.bv_len / 512;

        for ( ; nlbn--; lbn++, src_p += 512 )
            {
            /* Simulate a processing of data in the I/O buffer */
            char *srcp = src_p, *dstp = src_p;
            int  count = DUDRV$K_SECTORSZ;

            while ( count-- )
                *(dstp++) = ~(*(srcp++));
            }

        kunmap_atomic(bv_page);
        bio_advance_iter(src, &src_iter, src_bv.bv_len);
        }

Is this correct? Or do I need to use something like bio_for_each_segment(bvl, bio, iter)?
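For comparison, a minimal sketch of the same per-byte transform written with bio_for_each_segment(). Note that the macro walks a single bio from its current bi_iter; a chain linked through bi_next would still need an outer loop. The function name is hypothetical:

    /*
     * Walk every segment of one bio and invert each byte, as a stand-in
     * for the real encrypt/decrypt step. bvec is a copy, so the bio's
     * own biovec array is never modified.
     */
    static void dudrv_invert_bio(struct bio *bio)
    {
        struct bio_vec bvec;
        struct bvec_iter iter;

        bio_for_each_segment(bvec, bio, iter) {
            char *page_addr = kmap_atomic(bvec.bv_page);
            char *p = page_addr + bvec.bv_offset;
            unsigned int count = bvec.bv_len;

            while (count--) {
                *p = ~(*p);
                p++;
            }

            kunmap_atomic(page_addr);
        }
    }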

Recommended Answer

The root of the problem is a "feature" of the Block I/O methods. In particular (see the description in the Linux kernel documentation reference):

Biovecs can be shared between multiple bios - a bvec iter can represent an arbitrary range of an existing biovec, both starting and ending midway through biovecs. This is what enables efficient splitting of arbitrary bios. Note that this means we only use bi_size to determine when we've reached the end of a bio, not bi_vcnt - and the bio_iovec() macro takes bi_size into account when constructing biovecs.

So, in my case, this is what caused the buffers to overrun the disk sectors.

The trick is to set REQ_NOMERGE_FLAGS in .bi_opf before sending the BIO to the backing device driver.
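That would presumably look something like the line below, placed just before the bio is handed down. REQ_NOMERGE is the individual "do not merge" bit; REQ_NOMERGE_FLAGS, as referenced above, is the wider mask from blk_types.h (which, in this kernel era, also covers REQ_PREFLUSH and REQ_FUA), so which of the two is appropriate depends on the driver:

    /* Prevent the block layer from merging this bio with neighbouring requests. */
    bio->bi_opf |= REQ_NOMERGE;

    generic_make_request(bio);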

The second reason is that the backing device driver returns a non-current (stale) .bi_iter. So we need to save it (before submitting the BIO request to the backend) and restore it in our bio_endio() routine.
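A hedged sketch of that save/restore, assuming a small per-bio context; dudrv_ctx, dudrv_decrypt_segments and the surrounding flow are hypothetical, and error handling is omitted:

    /*
     * Hypothetical per-bio context: the backing driver consumes bi_iter while
     * completing the request, so the iterator is captured at submit time.
     */
    struct dudrv_ctx {
        struct bvec_iter saved_iter;          /* bi_iter as it was at submission */
        bio_end_io_t     *orig_end_io;
        void             *orig_private;
    };

    static void dudrv_read_end_io(struct bio *bio)
    {
        struct dudrv_ctx *ctx = bio->bi_private;

        bio->bi_iter = ctx->saved_iter;       /* restore before touching the data */
        dudrv_decrypt_segments(bio);          /* hypothetical in-place transform */

        bio->bi_end_io  = ctx->orig_end_io;   /* put the original completion back */
        bio->bi_private = ctx->orig_private;
        kfree(ctx);
        bio_endio(bio);                       /* complete the request upward */
    }

    static void dudrv_submit_read(struct bio *bio)
    {
        struct dudrv_ctx *ctx = kmalloc(sizeof(*ctx), GFP_NOIO);

        ctx->saved_iter   = bio->bi_iter;     /* the backend will advance bi_iter */
        ctx->orig_end_io  = bio->bi_end_io;
        ctx->orig_private = bio->bi_private;
        bio->bi_private   = ctx;
        bio->bi_end_io    = dudrv_read_end_io;

        generic_make_request(bio);
    }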

