Can anyone help speed up my SQL Bulk Insert? It slows down the bigger the table gets...


Question




Hi All,

I was wondering if anyone could give any advice on using the C# SqlBulkCopy class. I have been googling this problem but I think my needs are quite specific.

Basically, I'm performing a database conversion from an old Paradox DB to SQL Server 2008. I am using an OdbcDataReader and SqlBulkCopy to transfer data. This is working fine, except that the more rows are inserted, the more dramatically the insert rate decreases. To begin with, the process is loading thousands of rows a second, but by the time we get to 200,000 rows, for example, it's inserting fewer than 100 per second.

There are no indexes at all on the table at this stage.

Is there something I can flush, or is there a better technique to achieve this?

Code block below.

Regards,
Martin.

public void BulkLoad(String TableName, OdbcDataReader Reader) {
    System.Data.SqlClient.SqlBulkCopy bulkCopy = new System.Data.SqlClient.SqlBulkCopy(ConnectionString, SqlBulkCopyOptions.TableLock);
    bulkCopy.DestinationTableName = TableName;
    //bulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(OnSqlRowsCopied);
    //bulkCopy.NotifyAfter = 10;
    bulkCopy.BatchSize = 5000;    // commit every 5,000 rows
    bulkCopy.BulkCopyTimeout = 0; // disable the timeout for long transfers

    try {
        bulkCopy.WriteToServer(Reader);
    }
    finally {
        // Close both ends whether or not the copy succeeded.
        bulkCopy.Close();
        Reader.Close();
    }
}

Answer

Not an answer really, but I think it might be something to do with the speed of the Reader at the Paradox end. The only way I've managed to get around this is to split the Paradox tables up programmatically beforehand into 100,000-row tables.

This at least removes the exponential slowdown. A 500,000-row table now takes about 15 minutes rather than an hour, and a million-row table now takes 30 minutes as opposed to nearly 4 hours.

For these smaller splits, there is a pause whilst the reader populates, and then the rows are transferred really quickly. For the larger bulk, the transfer begins straight away, so I'm thinking that perhaps the loading is 'catching up' with the slow Paradox reader, causing the slowdown.

Not really an answer as such, but the database transfers are at least a lot quicker now.
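The chunking workaround described above can be sketched roughly as follows. This is a minimal sketch, not the poster's actual code: `GetChunkReader` is a hypothetical helper that is assumed to open an `OdbcDataReader` over one pre-split slice of the Paradox source, and the total row count is assumed to be known up front.

```csharp
using System;
using System.Data.Odbc;
using System.Data.SqlClient;

public static class ChunkedLoader
{
    const int ChunkSize = 100000; // size of each pre-split Paradox slice

    // getChunkReader(offset, count) is a hypothetical callback that returns
    // an open OdbcDataReader over rows [offset, offset + count) of the source.
    public static void BulkLoadInChunks(string connectionString,
                                        string tableName,
                                        int totalRows,
                                        Func<int, int, OdbcDataReader> getChunkReader)
    {
        for (int offset = 0; offset < totalRows; offset += ChunkSize)
        {
            // Each chunk gets its own reader and its own SqlBulkCopy, so the
            // slow Paradox reader never falls far behind the SQL Server insert.
            using (OdbcDataReader reader = getChunkReader(offset, ChunkSize))
            using (var bulkCopy = new SqlBulkCopy(connectionString,
                                                  SqlBulkCopyOptions.TableLock))
            {
                bulkCopy.DestinationTableName = tableName;
                bulkCopy.BatchSize = 5000;    // commit in 5,000-row batches
                bulkCopy.BulkCopyTimeout = 0; // no timeout for long transfers
                bulkCopy.WriteToServer(reader);
            }
        }
    }
}
```

The `using` blocks dispose each reader and bulk copy per chunk, which mirrors the effect of splitting the Paradox tables beforehand: the source is fully read into each slice before the next transfer starts.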

