What is the most efficient / best practice to Upsert 5000+ rows without MERGE in SQL Server?

Problem Description

I have a web application which receives about 50 hits per second, and on each hit I am upsert'ing around 10 records in a central SQL Server database. Roughly once every 3 seconds I am upserting 5000+ rows for a single inbound connection.

Currently I have a stored procedure which takes XML as a parameter. I do an INSERT into my main table from my XML where a row field doesn't match, then update the whole table with values from my XML.

The operation isn't slow by any means, but I really would like to know the best way to do this. I am running on SQL Server 2005 so I don't have the MERGE operation.
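For reference, on SQL Server 2005 an XML parameter like this is typically shredded into a relational form with the `nodes()` / `value()` methods. A minimal sketch, where the element and attribute names are hypothetical (not taken from the question):

```sql
-- Hypothetical sketch: turn an XML parameter into a relational temp table.
-- The <rows>/<row> shape and the Id/Hits attributes are illustrative only.
DECLARE @xml XML = N'<rows><row Id="1" Hits="10"/><row Id="2" Hits="3"/></rows>';

SELECT  r.value('@Id',   'INT') AS Id,
        r.value('@Hits', 'INT') AS Hits
INTO    #incoming
FROM    @xml.nodes('/rows/row') AS x(r);

-- #incoming now holds one row per <row> element.
```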

Answer

I would do the UPDATE first, otherwise you'll update the rows you've just inserted:

SELECT ... INTO #temp FROM (shred XML)

BEGIN TRAN

UPDATE ... FROM ... WHERE (matches using #temp)

INSERT ... SELECT ... FROM #temp WHERE NOT EXISTS (...)

COMMIT
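A fleshed-out version of that outline might look like the following; `dbo.Stats` and the `(Id, Hits)` columns of `#incoming` are hypothetical names for illustration, not from the answer:

```sql
-- Hypothetical sketch of the UPDATE-then-INSERT upsert pattern.
-- Assumes #incoming(Id, Hits) was already populated from the XML.
BEGIN TRAN;

-- Update rows that already exist in the target table.
UPDATE  t
SET     t.Hits = s.Hits
FROM    dbo.Stats AS t
JOIN    #incoming AS s ON s.Id = t.Id;

-- Insert only the rows that are not present yet.
INSERT INTO dbo.Stats (Id, Hits)
SELECT  s.Id, s.Hits
FROM    #incoming AS s
WHERE   NOT EXISTS (SELECT 1 FROM dbo.Stats AS t WHERE t.Id = s.Id);

COMMIT;
```

Doing the UPDATE before the INSERT, inside one transaction, avoids re-matching (and needlessly updating) rows the same batch just inserted.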

I'd also consider changing the XML to a temp table and using SqlBulkCopy. We've found this to be more efficient than parsing XML, generally for more than a few hundred rows. If you can't change that, then do you at least shred the XML into a temp table first?
