OutOfMemoryException when I read 500MB FileStream


Problem description

I'm using FileStream to read a big file (> 500 MB) and I get an OutOfMemoryException.

Are there any solutions for this?

My code is:

using (var fs3 = new FileStream(filePath2, FileMode.Open, FileAccess.Read))
{
    byte[] b2 = ReadFully(fs3, 1024);
}


public static byte[] ReadFully(Stream stream, int initialLength)
{
    // If we've been passed an unhelpful initial length, just
    // use 32K.
    if (initialLength < 1)
    {
        initialLength = 32768;
    }

    byte[] buffer = new byte[initialLength];
    int read = 0;

    int chunk;
    while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0)
    {
        read += chunk;

        // If we've reached the end of our buffer, check to see if there's
        // any more information
        if (read == buffer.Length)
        {
            int nextByte = stream.ReadByte();

            // End of stream? If so, we're done
            if (nextByte == -1)
            {
                return buffer;
            }

            // Nope. Resize the buffer, put in the byte we've just
            // read, and continue
            byte[] newBuffer = new byte[buffer.Length * 2];
            Array.Copy(buffer, newBuffer, buffer.Length);
            newBuffer[read] = (byte)nextByte;
            buffer = newBuffer;
            read++;
        }
    }

    // Buffer is now too big. Shrink it.
    byte[] ret = new byte[read];
    Array.Copy(buffer, ret, read);
    return ret;
}

Solution

The code you show reads the entire content of the 500 MB file into a contiguous region of memory. It's not surprising that you get an out-of-memory condition: a single 500 MB byte[] needs 500 MB of contiguous address space, which can be hard to find, especially in a 32-bit worker process.

The solution is, "don't do that."

What are you really trying to do?


If you want to read the file completely, there's a much simpler way than the ReadFully method you're using. Try this:

using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[fs.Length];
    int bytesRead = fs.Read(buffer, 0, buffer.Length);
    // buffer now contains the entire contents of the file
}
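
As a side note, the same "read it all at once" approach can be written as a single call to File.ReadAllBytes (System.IO), which also handles the fact that one Read call is not guaranteed to fill the whole buffer. It has exactly the same scaling problem, of course:

// Equivalent one-liner; still loads the entire file into memory.
byte[] buffer = File.ReadAllBytes(filePath);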

But... using this code won't solve your problem. It might work for a 500 MB file. It won't work for a 750 MB file, or a 1 GB file. At some point you will reach the limit of memory on your system, and you will have the same out-of-memory error you started with.

The problem is that you are trying to hold the entire contents of the file in memory at one time. This is usually unnecessary, and it is doomed to fail as files grow in size. It's no problem when the file size is 16 KB. At 500 MB, it's the wrong approach.

This is why I have asked several times: what are you really trying to do?


It sounds like you want to send the contents of a file out to the ASP.NET response stream. That is the real question: not "how do I read a 500 MB file into memory?" but "how do I send a large file to the ASP.NET Response stream?"

For this, once again, it's fairly simple.

// emit the contents of a file into the ASP.NET Response stream
using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
    Response.BufferOutput = false;   // to prevent buffering
    byte[] buffer = new byte[1024];
    int bytesRead = 0;
    while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
    }
}

What it does is iteratively read a chunk from the file and write that chunk to the Response stream, until there is nothing more to read in the file. This is what is meant by "streaming IO". The data passes through your logic, but is never held all in one place, just as a stream of water passes through a sluice. In this example, there is never more than 1 KB of file data in memory at any one time (not held by your application code, anyway; there are other IO buffers lower in the stack).

This is a common pattern in streamed IO. Learn it, use it.
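
If you're on .NET 4 or later, the framework has this copy loop built in: Stream.CopyTo reads and writes in chunks for you. A minimal sketch of the same response-streaming code using it (the 4096-byte buffer size here is just an illustrative choice):

// Same streaming pattern, using the built-in copy loop (.NET 4+).
using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
    Response.BufferOutput = false;           // don't buffer the whole response
    fs.CopyTo(Response.OutputStream, 4096);  // copy in 4 KB chunks
}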

The one trick when pumping data out to ASP.NET's Response.OutputStream is to set BufferOutput = false. By default, ASP.NET tries to buffer its output. In this case (a 500 MB file), buffering is a bad idea. Setting the BufferOutput property to false prevents ASP.NET from attempting to buffer all the file data before sending the first byte. Use it when you know the file you're sending is very large. The data will still be sent to the browser correctly.

And even this isn't the complete solution. You'll need to set the response headers and so on. I guess you're aware of that, though.
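
For completeness, here is a minimal sketch of that header setup, to go before the copy loop above; the MIME type is a placeholder, and the Content-Disposition/Content-Length values are just one reasonable choice for a file download:

// Hypothetical header setup before streaming the file (System.Web HttpResponse).
Response.ContentType = "application/octet-stream";   // placeholder; use the file's real MIME type
Response.AddHeader("Content-Disposition",
    "attachment; filename=\"" + Path.GetFileName(filePath) + "\"");
Response.AddHeader("Content-Length", new FileInfo(filePath).Length.ToString());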
