Why is execution of a stored procedure in Query Analyzer faster than executing the same SP as a scheduled job?


Problem description


Hi NG!

I wrote a stored procedure which at first deletes 100.000 rows and then
inserts 100.000 new rows.

There is a huge difference between executing this SP in the
Query Analyzer (runtime approx. 2 minutes) and scheduling this SP as a
job (runtime > 30 minutes!!!).

Why? What's the issue?
And how can I improve the "job" runtime?

The INSERT statement gets the data from a joined and grouped SELECT -
here is the code (exemplary):
INSERT INTO [Table1]
(Field1, Field2,... )
SELECT DISTINCT Field1, Count(field2), (Count(field2) * (-1))
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY field1, field2...
I have noticed something strange: after executing the SP, Query Analyzer
outputs "1 Row(s) affected." line by line - 100.000 times... Obviously
every row is inserted one by one.

Any idea?

Would it help to create a temporary table with SELECT INTO, and then
just INSERT this data (w/o aggregation) into the target table?
Any help is greatly appreciated, TIA and best regards

---
Heiko
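[Editor's note] The temp-table idea at the end of the question could be sketched like this - the table and column names are the placeholders from the exemplary query above, not real objects:

```sql
-- Sketch only: Table1..Table5, Field1 and field2 are the question's placeholders.
-- Stage the aggregated result in a local temp table first...
SELECT Field1,
       COUNT(field2)        AS Cnt,
       COUNT(field2) * (-1) AS NegCnt
INTO #Staging
FROM Table2 T2
INNER JOIN Table3 T3 ON T2.[ID] = T3.[ID]
INNER JOIN Table4 T4 ON T2.[DID] = T4.[ID]
INNER JOIN Table5 T5 ON T2.[SID] = T5.[ID]
GROUP BY Field1;

-- ...then copy it into the target table with a single set-based INSERT.
INSERT INTO [Table1] (Field1, Field2, Field3)
SELECT Field1, Cnt, NegCnt
FROM #Staging;

DROP TABLE #Staging;
```

Note that staging changes nothing if the slowdown comes from a row-by-row trigger on [Table1]: the final INSERT still fires that trigger.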

Solution

Strange. If you have several "1 rows affected" messages, you are executing several statements. You get
one such message per DML (INSERT, UPDATE, DELETE or SELECT) statement you execute. So it sounds like
you either have a loop of some kind (a cursor, perhaps) or in some other way a whole bunch of DML
statements. I would look into this as the first step.

Anyhow, it is indeed the "rows affected" messages that cause the difference between QA and Agent.
Agent has a delay for each such "DONE IN PROC" signal. You can lessen the effect by adding SET
NOCOUNT ON at the beginning of the proc code. If there still is a significant difference, try using
a CmdExec jobstep instead, from which you execute the proc using OSQL.EXE.
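[Editor's note] A sketch of the first suggestion - the procedure name dbo.RefreshTable1 is invented for illustration, standing in for the question's SP:

```sql
-- Make SET NOCOUNT ON the first statement of the proc, so the server no
-- longer sends Agent one DONE_IN_PROC message per executed statement.
ALTER PROCEDURE dbo.RefreshTable1   -- hypothetical name for the question's SP
AS
BEGIN
    SET NOCOUNT ON;

    -- ... the original DELETE and INSERT logic, unchanged ...
END
```

If the job is still slow, the CmdExec alternative would be a jobstep of type "Operating system command" running something like `osql -E -d MyDatabase -Q "EXEC dbo.RefreshTable1"` (-E = Windows authentication, -d = database, -Q = run the query and exit; the database name here is also made up).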

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Heiko Pliefke" <sl***********@freenet.de> wrote in message news:1104416731.799992@nbgm66x...




> I have noticed something strange: After executing the SP the queryanalyzer
> outputs "1 Row(s) affected." line by line - 100.000 times... Obviously
> every line is inserted one by one.

It appears you have an INSERT trigger on the table that iterates over the
inserted table.

To improve performance, add SET NOCOUNT ON at the beginning of the trigger
and/or proc to suppress DONE_IN_PROC messages. Also, consider revising the
trigger code to use a set-based technique.
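[Editor's note] To illustrate the set-based suggestion with a hypothetical trigger (the question's actual trigger code is not shown, so every name below is invented): instead of cursoring over the `inserted` pseudo-table one row at a time, a single statement can process all inserted rows at once.

```sql
-- Slow pattern (one DML statement per inserted row, hence 100.000
-- "1 Row(s) affected" messages):
--   DECLARE cur CURSOR FOR SELECT Field1, field2 FROM inserted;
--   ... FETCH loop with one UPDATE per row ...

-- Set-based replacement: aggregate "inserted" once and join it in.
ALTER TRIGGER trg_Table1_Insert ON [Table1]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;   -- also silences the per-row rowcount messages

    UPDATE s
    SET s.Total = s.Total + i.Sum2
    FROM SummaryTable s                       -- invented summary table
    INNER JOIN (SELECT Field1, SUM(field2) AS Sum2
                FROM inserted
                GROUP BY Field1) AS i
        ON s.Field1 = i.Field1;
END
```

The key point is that `inserted` holds all rows of the triggering INSERT, so one joined UPDATE replaces 100.000 single-row statements.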

--
Happy Holidays

Dan Guzman
SQL Server MVP

"Heiko Pliefke" <sl***********@freenet.de> wrote in message
news:1104416731.799992@nbgm66x...



> It appears you have an INSERT trigger on the table that iterates over the inserted table.

Good catch, Dan!

--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Dan Guzman" <gu******@nospam-online.sbcglobal.net> wrote in message
news:qp****************@newssvr12.news.prodigy.com...




