Setting a value for LIMIT while using bulk collect

Question

I wanted to know if there is any technique by which we can calculate the value that needs to be set for the LIMIT clause of a bulk collect operation. For example, below, let's say our cursor has 10 million records. What value can we set for the LIMIT clause to get optimum performance? Is there any way we can calculate it?

declare
    cursor c_emp is <some select query>;

    var <variable>;
begin
    open c_emp;
    loop
        fetch c_emp bulk collect into var limit 2;
        exit when c_emp%NOTFOUND;
    end loop;
    close c_emp;
end;

Answer

Use an implicit cursor in a cursor FOR LOOP. It makes the code simpler and the default value of 100 is almost always good enough.
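
A minimal sketch of that pattern, assuming a hypothetical emp table (the names are illustrative; with the default PLSQL_OPTIMIZE_LEVEL of 2 or higher, the compiler array-fetches roughly 100 rows per round trip automatically):

begin
    -- Implicit cursor FOR loop: the bulk fetching happens behind the scenes,
    -- so there is no LIMIT value to choose or tune.
    for r in (select employee_id, salary from emp) loop
        dbms_output.put_line(r.employee_id || ': ' || r.salary);
    end loop;
end;
/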

I've seen a lot of people waste a lot of time worrying about this. If you think about why bulk collect improves performance, you will understand why large numbers won't help.

Bulk collect improves performance by reducing the context switches between SQL and PL/SQL. Imagine the highly unlikely worst-case scenario, where context switching uses up all the run time. A limit of 2 eliminates 50% of the context switches; 10 eliminates 90%; 100 eliminates 99%, etc. Plot it out and you'll realize it's not worth finding the optimal limit size: the savings flatten out almost immediately.
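
If you do keep an explicit BULK COLLECT, a sketch of the usual pattern follows (the query and collection type here are assumptions; note the exit test checks the fetched batch's COUNT rather than %NOTFOUND, so the last, partially filled batch still gets processed):

declare
    cursor c_emp is select * from emp;                   -- hypothetical query
    type t_emp_tab is table of c_emp%rowtype;
    l_rows t_emp_tab;
begin
    open c_emp;
    loop
        fetch c_emp bulk collect into l_rows limit 100;  -- ~99% of context switches removed
        exit when l_rows.count = 0;                      -- test the batch, not %NOTFOUND
        for i in 1 .. l_rows.count loop
            null;  -- process l_rows(i) here
        end loop;
    end loop;
    close c_emp;
end;
/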

Use the default. Spend your time worrying about more important things.
