What is the recommended batch size for SqlBulkCopy?
Question
What is the recommended batch size for SqlBulkCopy? I'm looking for a general formula I can use as a starting point for performance tuning.
Answer
I have an import utility sitting on the same physical server as my SQL Server instance. Using a custom IDataReader, it parses flat files and inserts them into a database using SqlBulkCopy. A typical file has about 6M qualified rows, averaging 5 columns of decimal and short text, about 30 bytes per row.
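For reference, the setup described above can be sketched roughly as follows. This is a minimal illustration, not the answerer's actual code: the destination table name, connection string, and timeout value are placeholder assumptions.

```csharp
using System.Data;
using System.Data.SqlClient;

class BulkImport
{
    // flatFileReader is the custom IDataReader that parses the flat file;
    // connectionString points at the target SQL Server (both are assumed names).
    static void Run(IDataReader flatFileReader, string connectionString)
    {
        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.ImportTarget"; // placeholder table
            bulkCopy.BatchSize = 5000;        // rows per batch; the value tuned below
            bulkCopy.BulkCopyTimeout = 600;   // seconds; raise for multi-million-row files
            bulkCopy.WriteToServer(flatFileReader); // streams rows to the server in batches
        }
    }
}
```

BatchSize defaults to 0 (the whole file as a single batch), so it must be set explicitly to get batched commits at all.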
Given this scenario, I found a batch size of 5,000 to be the best compromise of speed and memory consumption. I started with 500 and experimented with larger values. I found 5,000 to be 2.5x faster, on average, than 500. Inserting the 6 million rows takes about 30 seconds with a batch size of 5,000 and about 80 seconds with a batch size of 500.
10,000 was not measurably faster. Moving up to 50,000 improved the speed by a few percentage points but it's not worth the increased load on the server. Above 50,000 showed no improvements in speed.
This isn't a formula, but it's another data point for you to use.