Performance Issue On SQL


Problem Description

Hi,

I need to copy a parent table into another child table, where the parent table contains a lot of records, i.e. about 22,224,171 records.

I need to decrease the copying time. I have already applied indexes to 4 columns in the child table.

Could anyone help me solve this performance issue?

Thanks in advance.

Regards,
Victor

Recommended Answer
Copying large amounts of data into a table is a balancing act between the data you are inserting and the indexes on that table. Have a look at this question on Stack Overflow...

http://stackoverflow.com/questions/6955456/drop-rebuild-indexes-during-bulk-insert

"There is overhead in maintaing indexes during the insert and there is overhead in rebuilding the indexes after the insert. The only way to definitively determine which method incurs less overhead is to try them both and benchmark them. "

You'll definitely want to batch the inserts up into smaller chunks; if you just perform one giant INSERT statement, it will all be handled by a single transaction, which will cause large transaction log growth and won't be the fastest.
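
To make that concrete, here is a minimal T-SQL sketch of a batched copy loop. The table and column names (ParentTable, ChildTable, Id, Col1, Col2) are placeholders, and it assumes the parent table has an integer key whose range you can walk:

DECLARE @BatchSize INT = 10000;
DECLARE @LastId INT = 0;
DECLARE @MaxId INT;

SELECT @MaxId = MAX(Id) FROM ParentTable;

WHILE @LastId < @MaxId
BEGIN
    -- Each pass copies one key range in its own implicit transaction,
    -- which keeps transaction log growth bounded.
    INSERT INTO ChildTable (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM ParentTable
    WHERE Id > @LastId AND Id <= @LastId + @BatchSize;

    SET @LastId = @LastId + @BatchSize;
END

Because each batch commits separately, log space can be reused between batches (under the SIMPLE recovery model) rather than the whole 22-million-row copy living inside one open transaction.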

I have found this pattern works well with large amounts of data (in my case this was an SSIS package performing the ETL):

* Drop all indexes on the destination table, apart from the clustered index (see the sketch below)
* Run the bulk insert operation on the destination table (e.g. do this in batches of 1000 records, as in the loop above)
* Rebuild the indexes on the destination table
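
A rough sketch of the drop/rebuild steps, again with hypothetical index, table, and column names:

-- 1) Drop the nonclustered indexes, keeping the clustered index in place
DROP INDEX IX_Child_Col1 ON ChildTable;
DROP INDEX IX_Child_Col2 ON ChildTable;

-- 2) Run the batched insert loop from the earlier sketch here

-- 3) Rebuild the nonclustered indexes once the load has finished
CREATE NONCLUSTERED INDEX IX_Child_Col1 ON ChildTable (Col1);
CREATE NONCLUSTERED INDEX IX_Child_Col2 ON ChildTable (Col2);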

It totally depends on the clustered index as well. I remember doing some work on sales data where the data was clustered by financial date rather than just an auto-incrementing field. In that case it was important to insert the data into the destination table in the order of the clustered index (e.g. have your source data sorted before trying to insert it).
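
For example, assuming (purely for illustration) that the destination's clustered index is on a FinancialDate column, sorting the source to match means the rows arrive in clustered-index order:

-- FinancialDate and Amount are placeholder column names
INSERT INTO ChildTable (FinancialDate, Amount)
SELECT FinancialDate, Amount
FROM ParentTable
ORDER BY FinancialDate;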

