What is the recommended batch size for SqlBulkCopy?


Question


What is the recommended batch size for SqlBulkCopy? I'm looking for a general formula I can use as a starting point for performance tuning.

Answer


I have an import utility sitting on the same physical server as my SQL Server instance. Using a custom IDataReader, it parses flat files and inserts them into a database using SqlBulkCopy. A typical file has about 6 million qualified rows, averaging 5 columns of decimal and short text, about 30 bytes per row.
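The setup described above can be sketched roughly as follows. This is an illustrative sketch, not the answerer's actual code: the connection string and destination table name are placeholders, and the custom IDataReader that parses the flat file is assumed rather than shown.

```csharp
using System.Data;
using System.Data.SqlClient;

class BulkLoader
{
    // Streams rows from an IDataReader into SQL Server using SqlBulkCopy
    // with the 5,000-row batch size the answer settled on.
    static void Load(IDataReader flatFileReader)
    {
        // Placeholder connection string; adjust for your environment.
        using (var connection = new SqlConnection(
            "Server=.;Database=Import;Integrated Security=true;"))
        {
            connection.Open();

            using (var bulkCopy = new SqlBulkCopy(connection))
            {
                bulkCopy.DestinationTableName = "dbo.ImportTarget"; // hypothetical table
                bulkCopy.BatchSize = 5000;   // rows sent per batch; the tuning knob discussed here
                bulkCopy.BulkCopyTimeout = 0; // disable the timeout for large files

                // WriteToServer pulls rows from the reader until it is exhausted.
                bulkCopy.WriteToServer(flatFileReader);
            }
        }
    }
}
```

`BatchSize` controls how many rows are sent to the server per round trip; changing only that property is how you would reproduce the 500-vs-5,000 comparison described below.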


Given this scenario, I found a batch size of 5,000 to be the best compromise between speed and memory consumption. I started with 500 and experimented with larger values. I found 5,000 to be 2.5x faster, on average, than 500: inserting the 6 million rows takes about 30 seconds with a batch size of 5,000 and about 80 seconds with a batch size of 500.


10,000 was not measurably faster. Moving up to 50,000 improved the speed by a few percentage points, but it wasn't worth the increased load on the server. Above 50,000 there was no improvement in speed.


This isn't a formula, but it's another data point for you to use.

