Oracle JDBC prefetch: how to avoid running out of RAM / how to make Oracle faster over high-latency connections


Question

Using the Oracle Java JDBC driver (ojdbc14 10.2.x), loading a query with many rows takes forever in a high-latency environment. This is apparently because the default prefetch size in Oracle JDBC is 10, which costs one round trip per 10 rows. I am attempting to set an aggressive prefetch size to avoid this.

 PreparedStatement statement = conn.prepareStatement("select * from tablename");
 statement.setFetchSize(10000);
 ResultSet rs = statement.executeQuery();

This can work, but instead I get an out-of-memory exception. I had presumed that setFetchSize would tell it to buffer that many rows as they come in, using only as much RAM as each row actually requires. If I run with 50 threads, it runs out of memory even with 16G of -Xmx heap. It feels almost like a leak:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.lang.reflect.Array.newArray(Native Method)
    at java.lang.reflect.Array.newInstance(Array.java:70)
    at oracle.jdbc.driver.BufferCache.get(BufferCache.java:226)
    at oracle.jdbc.driver.PhysicalConnection.getCharBuffer(PhysicalConnection.java:7422)
    at oracle.jdbc.driver.OracleStatement.prepareAccessors(OracleStatement.java:983)
    at oracle.jdbc.driver.T4CTTIdcb.receiveCommon(T4CTTIdcb.java:273)
    at oracle.jdbc.driver.T4CTTIdcb.receive(T4CTTIdcb.java:144)
    at oracle.jdbc.driver.T4C8Oall.readDCB(T4C8Oall.java:771)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:346)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:205)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:861)
    at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1145)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1267)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3449)
    at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3493)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
    ....

What can I do to still get prefetch but not run out of RAM? What is going on?

The closest related item on SO is this: https://stackoverflow.com/a/14317881/32453

Answer

Basically, Oracle's default strategy in recent ojdbc jars is to pre-allocate, for each prefetched row, an array that accommodates the largest value each column could conceivably return from that query, for all rows. So in my case I had some VARCHAR2(4000) columns in there, and 50 threads (Statements) * 3 VARCHAR2 columns * 4000 chars, with a setFetchSize of a few hundred, was adding up to more than a gigabyte of RAM [yikes]. There does not appear to be an option to say "don't pre-allocate that array, just use the size of values as they come in." Ojdbc even keeps these pre-allocated buffers cached per connection between PreparedStatements so it can reuse them. Definitely a memory hog.
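To see why this blows up, you can estimate the buffer footprint yourself. A minimal sketch, assuming the 3 x VARCHAR2(4000) case above and 2 bytes per char (these factors are assumptions for illustration, not ojdbc internals):

```java
// Rough estimate of the char-buffer memory ojdbc pre-allocates:
// fetchSize rows * total max chars per row * 2 bytes per char, times open statements.
public class PrefetchMemory {
    static long estimateBufferBytes(int fetchSize, int maxCharsPerRow, int statements) {
        return (long) fetchSize * maxCharsPerRow * 2L * statements;
    }

    public static void main(String[] args) {
        // 3 VARCHAR2(4000) columns = 12000 chars per row, 50 concurrent statements.
        long modest = estimateBufferBytes(400, 3 * 4000, 50);       // fetch size 400
        long aggressive = estimateBufferBytes(10000, 3 * 4000, 50); // fetch size 10000
        System.out.println(modest + " " + aggressive); // roughly 0.45 GB vs 11+ GB
    }
}
```

Even a "modest" fetch size of a few hundred costs hundreds of megabytes across 50 statements, and the 10000 in the question swamps a 16G heap.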

One workaround: set setFetchSize to some sane amount. The default is 10, which can be quite slow on high-latency connections. Profile, and only raise the fetch size as high as actually yields a significant speed improvement.
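Such profiling might be sketched like this (the query is a placeholder; `timeQuery` needs a live Connection, while `pickFetchSize` just chooses the smallest size whose time is within 10% of the fastest run, so you don't pay the pre-allocation cost for a negligible speedup):

```java
import java.sql.*;
import java.util.*;

public class FetchSizeProfiler {
    // Time one full scan of the query at a given fetch size (requires a live connection).
    static long timeQuery(Connection conn, String sql, int fetchSize) throws SQLException {
        long start = System.nanoTime();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setFetchSize(fetchSize);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* drain all rows */ }
            }
        }
        return (System.nanoTime() - start) / 1_000_000; // millis
    }

    // Pick the smallest fetch size within 10% of the fastest measured time.
    static int pickFetchSize(Map<Integer, Long> millisByFetchSize) {
        long best = Collections.min(millisByFetchSize.values());
        return millisByFetchSize.entrySet().stream()
                .filter(e -> e.getValue() <= best * 1.1)
                .mapToInt(Map.Entry::getKey)
                .min()
                .orElseThrow();
    }
}
```

For example, if fetch sizes 10/400/1000/4000 measured 5000/600/580/570 ms, `pickFetchSize` would settle on 400: the larger sizes buy almost nothing while costing far more buffer memory.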

Another workaround is to determine the maximum actual column size, then rewrite the query so the declared size matches it (assuming 50 is the known max actual size): select substr(column_name, 0, 50)
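As a sketch, that rewrite can be done mechanically (the column and table names are placeholders; note that Oracle's substr treats a start position of 0 the same as 1):

```java
public class QueryCapper {
    // Wrap a VARCHAR2 column in substr() so the driver sizes its prefetch
    // buffer from the capped length instead of the declared VARCHAR2(4000).
    static String capColumn(String column, int maxActualLength) {
        return "substr(" + column + ", 1, " + maxActualLength + ") as " + column;
    }

    public static void main(String[] args) {
        String sql = "select " + capColumn("column_name", 50) + " from tablename";
        System.out.println(sql);
        // select substr(column_name, 1, 50) as column_name from tablename
    }
}
```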

Other things you can do: decrease the number of prefetch rows, increase the java -Xmx parameter, and only select the columns you actually need.

Once we were able to use a prefetch of at least 400 on all queries (make sure to profile to see what numbers are good for you; with high latency we saw improvements up to prefetch sizes of 3-4K), performance improved dramatically.

I suppose if you wanted to be really aggressive about sparse "really long" values, you might be able to re-query when you run into one of these [rare] large rows.
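A hedged sketch of that re-query idea, assuming the table has a primary key `id` and one wide column `val` (all names here are hypothetical): scan with the column capped, also select its real length, and fall back to a per-row query only when a value was truncated.

```java
import java.sql.*;

public class RequeryLongRows {
    static final int CAP = 50; // prefetch-friendly capped length

    // Detect truncation by comparing the real length against the cap.
    static boolean wasTruncated(int actualLength) {
        return actualLength > CAP;
    }

    // Scan with capped columns; re-query individually for the rare long value.
    static void scan(Connection conn) throws SQLException {
        String sql = "select id, substr(val, 1, " + CAP + ") as val, "
                   + "length(val) as len from tablename";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setFetchSize(400); // cheap now that each buffered row is small
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String val = rs.getString("val");
                    if (wasTruncated(rs.getInt("len"))) {
                        val = fetchFullValue(conn, rs.getLong("id"));
                    }
                    // ... process val ...
                }
            }
        }
    }

    static String fetchFullValue(Connection conn, long id) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("select val from tablename where id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getString(1);
            }
        }
    }
}
```

The extra round trip only happens for the rare oversized rows, so the common path keeps the small buffers and the large fetch size.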

More details here.
