Faster way to Insert, via script, in Oracle?


Question


I'm working with a C# .NET 4.0 application, that uses ODP.NET 11.2.0.2.0 with an Oracle 11g database. The application pre-loads a few look-up tables with data, and since most have less than 20 records, the scripts run pretty quickly. However, one of the scripts has 802 records, and takes 248.671 seconds to insert the records, which seems excessive for such a small amount of data with a database that advertises fast operations with large volumes of data.


So I'm wondering, is there a faster way to insert data, via script, than the current way the script is written?


The table being inserted into is defined like so:

CREATE TABLE FileIds
(
     Id                 NUMERIC(38)                         NOT NULL
    ,Name               NVARCHAR2(50)   DEFAULT 'Unknown'   NOT NULL 
    ,FileTypeGroupId    NUMERIC(38)                         NOT NULL
    ,CONSTRAINT FK_FileIds_FileTypeGroups FOREIGN KEY ( FileTypeGroupId ) REFERENCES FileTypeGroups ( Id )
)


And the script to insert looks like the following:

BEGIN
    INSERT ALL
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1152,5)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1197,10)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1200,6)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1143,3)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1189,9)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1109,7)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1166,4)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (0,8)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1149,2)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1400,1)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1330,11)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1000,0)
        -- 790 Records removed for example purposes.
        SELECT * FROM DUAL;
    COMMIT;
END;


The FileTypeGroups table, referenced in the foreign key, is pre-loaded prior to the loading of the FileIds table. There are no sequences or triggers associated with the FileIds table, and no indexes have yet been created for it.

Answer

The Problem


Parsing time may increase exponentially with certain types of statements, especially INSERT ALL. For example:

--Clear any cached statements, so we can consistently reproduce the problem.
alter system flush shared_pool;
alter session set sql_trace = true;

--100 rows
INSERT ALL
    INTO FileIds(Id,FileTypeGroupId) VALUES(1, 1)
    ...
    repeat 100 times
    ...
select * from dual;

--500 rows
INSERT ALL
    INTO FileIds(Id,FileTypeGroupId) VALUES(1, 1)
    ...
    repeat 500 times
    ...
select * from dual;

alter session set sql_trace = false;


Run the trace file through tkprof, and you can see the Parse time increases dramatically for a large number of rows. For example:

100 rows:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.06       0.05          0          1          0           0
Execute      1      0.00       0.00          0        100        303         100
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.06       0.05          0        101        303         100

500 rows:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     14.72      14.55          0          0          0           0
Execute      1      0.01       0.02          0        502       1518         500
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     14.74      14.58          0        502       1518         500

Solutions

  1. Break your large statement into several smaller statements. It's difficult to find the optimal size. On some versions of Oracle there's a magic number of rows that will cause the problem. I usually go for about 100 rows - enough to get most of the benefits of grouping statements, but low enough to avoid the parsing bug. OR...
  2. Try the insert into ... select ... from dual union all ... method instead. It usually runs much faster, although its parsing performance may also degrade significantly with size.
  3. Upgrade Oracle. Parsing performance has improved in newer versions. I can no longer reproduce this issue in version 12.2.
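
The union-all approach in solution 2 can be sketched against the same FileIds table; the rows shown are the first few values from the original script, trimmed here for illustration:

```sql
-- A single INSERT ... SELECT, with one SELECT ... FROM DUAL per row.
-- This parses as an ordinary insert-select, avoiding the INSERT ALL
-- parse-time blow-up described above.
INSERT INTO FileIds ( Id, FileTypeGroupId )
SELECT 1152,  5 FROM DUAL UNION ALL
SELECT 1197, 10 FROM DUAL UNION ALL
SELECT 1200,  6 FROM DUAL UNION ALL
SELECT 1143,  3 FROM DUAL;
-- ... remaining rows follow the same pattern.
COMMIT;
```

As with solution 1, it is still worth batching very large scripts (roughly 100 rows per statement) rather than emitting all 802 rows in one statement, since parse time for this form can also grow with statement size.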

Warning


Don't learn the wrong lesson from this. If you're worried about SQL performance, 99% of the time you're better off grouping similar things together instead of splitting them apart. You're doing things the right way, you just ran into a weird bug. (I searched My Oracle Support but couldn't find an official bug for this.)

