What is the fastest and safest way to append records to a disk file in a highly loaded .ashx HTTP handler?


Problem Description

What is the best option for writing (appending) records to a file in a highly parallel web environment (.NET 4, IIS 7)? I use an .ashx HTTP handler to receive small portions of data that should be written to a file quickly. First I used:

    using (var stream = new FileStream(fileName, FileMode.Append, FileAccess.Write, FileShare.ReadWrite, 8192))
    {
        stream.Write(buffer, 0, buffer.Length);
    } 

But I noticed that some records were broken or incomplete, probably because of FileShare.ReadWrite. Next I tried to change it to FileShare.Read. There were no broken records then, but from time to time I got this exception: System.IO.IOException: The process cannot access the file ... because it is being used by another process.

Ideally I would like the operating system to queue concurrent write requests so that all the records are eventually written. What file access API should I use?

Recommended Answer

There are two options, depending on the size of the data. If it is small, probably the best option is to simply synchronize access to the file through a shared lock. If possible, it would also be a good idea to keep the file open (flushing occasionally) rather than constantly opening and closing it. For example:

using System;
using System.IO;

class MeaningfulName : IDisposable {
    FileStream file;
    readonly object syncLock = new object();
    public MeaningfulName(string path) {
        // keep the file open for the lifetime of the object instead of
        // re-opening it on every append
        file = new FileStream(path, FileMode.Append, FileAccess.Write,
           FileShare.ReadWrite, 8192);
    }
    public void Dispose() {
        if(file != null) {
           file.Dispose();
           file = null;
        }
    }
    public void Append(byte[] buffer) {
        if(file == null) throw new ObjectDisposedException(GetType().Name);
        lock(syncLock) { // only 1 thread can be appending at a time
            file.Write(buffer, 0, buffer.Length);
            file.Flush();
        }
    }
}

That is thread-safe, and a single instance could be made available to all the .ashx handlers without issue.
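For illustration, here is a minimal sketch of how such a shared appender might be used from a handler. The handler name AppendLog, the static field, and the log path are assumptions added here, not part of the original answer.

using System.Web;

public class AppendLog : IHttpHandler {
    // hypothetical shared instance: one appender per application domain
    static readonly MeaningfulName appender =
        new MeaningfulName(@"C:\logs\records.dat"); // assumed path

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context) {
        // copy the small posted record into a buffer and append it under the lock
        int length = (int)context.Request.InputStream.Length;
        byte[] buffer = new byte[length];
        context.Request.InputStream.Read(buffer, 0, length);
        appender.Append(buffer);
    }
}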

However, for larger data you might want to look at a synchronized reader/writer queue: all the writers (the .ashx hits) throw data onto the queue, and a single dedicated writer thread dequeues items and appends them to the file. That removes the I/O time from the .ashx request, although you might want to cap the queue size in case the writer can't keep up. There's a sample here of a capped synchronized reader/writer queue.
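As a rough illustration of that pattern (not the linked sample), the sketch below uses .NET 4's BlockingCollection with a bounded capacity as the capped queue and a dedicated background thread as the single writer. The class name QueuedAppender and the capacity value are assumptions.

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

class QueuedAppender : IDisposable {
    readonly BlockingCollection<byte[]> queue;
    readonly Thread writerThread;
    readonly FileStream file;

    public QueuedAppender(string path, int boundedCapacity) {
        // cap the queue so producers block if the writer thread falls behind
        queue = new BlockingCollection<byte[]>(boundedCapacity);
        file = new FileStream(path, FileMode.Append, FileAccess.Write,
            FileShare.Read, 8192);
        writerThread = new Thread(WriteLoop) { IsBackground = true };
        writerThread.Start();
    }

    // called from the .ashx handlers; no file I/O happens on the request thread
    public void Append(byte[] record) {
        queue.Add(record);
    }

    void WriteLoop() {
        // single dedicated writer: dequeue and append until adding is completed
        foreach (byte[] record in queue.GetConsumingEnumerable()) {
            file.Write(record, 0, record.Length);
            file.Flush();
        }
    }

    public void Dispose() {
        queue.CompleteAdding();   // let the writer drain the remaining records
        writerThread.Join();
        file.Dispose();
    }
}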
