Problem when using a StreamWriter for writing very large files


Problem description



We are using a SQL Server CLR stored procedure (written in .NET 3.5 and C#) to extract data from tables to flat files.

The CLR stored procedure uses a SqlDataReader to get the table data, and a StreamWriter to write the data to file. The CLR stored procedure can "split files": if the number of rows written exceeds a supplied value, the StreamWriter is closed (Flush(), Close(), Dispose()) and a new instance of the StreamWriter is created (creating a new file to write to). This works fine for most of the data.
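
A minimal sketch of the split-file pattern described above, written as a standalone method. The names (ExportToFiles, basePath, maxRowsPerFile) and the single-column row formatting are illustrative assumptions, not the original procedure's code; a real CLR procedure writing to disk would also need the assembly deployed with EXTERNAL_ACCESS permission.

using System.Data.SqlClient;
using System.IO;

public static class FlatFileExporter
{
    // Sketch of the split-file logic described in the question.
    public static void ExportToFiles(string connectionString, string query,
                                     string basePath, int maxRowsPerFile)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(query, connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                int fileIndex = 0;
                int rowsInFile = 0;
                StreamWriter writer = new StreamWriter(basePath + "." + fileIndex);
                try
                {
                    while (reader.Read())
                    {
                        if (rowsInFile >= maxRowsPerFile)
                        {
                            // Row limit reached: flush/close the current file
                            // and open a fresh StreamWriter on a new file.
                            writer.Flush();
                            writer.Close();
                            fileIndex++;
                            rowsInFile = 0;
                            writer = new StreamWriter(basePath + "." + fileIndex);
                        }
                        writer.WriteLine(reader[0].ToString());
                        rowsInFile++;
                    }
                }
                finally
                {
                    writer.Close(); // Close() flushes and disposes the writer
                }
            }
        }
    }
}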

There is one table that generates a very large flat file (over 18 GB). The data is split across multiple files (100K rows per file; the table has 900K+ rows). The first 100K-row file writes in 5 minutes, but the second file takes 20 minutes and the third takes 50 minutes. Each file should take about the same time to write, yet the performance gets worse with each new file. Can anyone offer a solution or a reason for this behavior?
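
One way to narrow this down is to time identical, fixed-size splits with the database out of the picture. The standalone sketch below (file names, sizes, and the dummy row content are all made up for the test) writes several same-sized files and prints the elapsed time for each.

using System;
using System.Diagnostics;
using System.IO;

class SplitTimingRepro
{
    // Writes several identically sized files and prints how long each
    // one takes, with no database involved.
    static void Main()
    {
        const int filesToWrite = 3;       // e.g. three 100K-row splits
        const int rowsPerFile = 100000;
        string dummyRow = new string('x', 200); // dummy 200-char "row"

        for (int fileIndex = 0; fileIndex < filesToWrite; fileIndex++)
        {
            Stopwatch timer = Stopwatch.StartNew();
            using (StreamWriter writer = new StreamWriter("split_" + fileIndex + ".txt"))
            {
                for (int row = 0; row < rowsPerFile; row++)
                {
                    writer.WriteLine(dummyRow);
                }
            }
            timer.Stop();
            Console.WriteLine("File {0}: {1} ms", fileIndex, timer.ElapsedMilliseconds);
        }
    }
}

If the per-file times stay flat in this isolated test, the degradation presumably originates on the SqlDataReader/query side rather than in the file writing itself.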

We tried to write the entire contents to a single file and that took nearly 2 days to finish.

Thanks!!

Solution

Hello,

If possible, please share the current code, so that anyone who would like to assist does not have to guess at what has been done so far.

