Oracle JDBC prefetch: how to avoid running out of RAM

Question

Using Oracle Java JDBC (ojdbc14 10.2.x), loading a query with many rows takes forever (high-latency environment). Apparently the default prefetch size in Oracle JDBC is 10, which costs one round trip per 10 rows. I am attempting to set an aggressive prefetch size to avoid this:

 PreparedStatement statement = conn.prepareStatement("select * from tablename");
 statement.setFetchSize(10000);  // ask the driver to fetch 10,000 rows per round trip
 ResultSet rs = statement.executeQuery();

This can work, but instead I get an out of memory exception. I had presumed that setFetchSize would tell it to buffer "that many rows" as they come in, using only as much RAM as each row actually requires. If I run with 50 threads, even with 16 GB of -Xmx space, it runs out of memory. It feels almost like a leak:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.lang.reflect.Array.newArray(Native Method)
    at java.lang.reflect.Array.newInstance(Array.java:70)
    at oracle.jdbc.driver.BufferCache.get(BufferCache.java:226)
    at oracle.jdbc.driver.PhysicalConnection.getCharBuffer(PhysicalConnection.java:7422)
    at oracle.jdbc.driver.OracleStatement.prepareAccessors(OracleStatement.java:983)
    at oracle.jdbc.driver.T4CTTIdcb.receiveCommon(T4CTTIdcb.java:273)
    at oracle.jdbc.driver.T4CTTIdcb.receive(T4CTTIdcb.java:144)
    at oracle.jdbc.driver.T4C8Oall.readDCB(T4C8Oall.java:771)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:346)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:205)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:861)
    at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1145)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1267)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3449)
    at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3493)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
    ....

What can I do to still get prefetch but not run out of RAM? What is going on?

The closest related item on SO is this: https://stackoverflow.com/a/14317881/32453

Answer

Basically, Oracle's default strategy in later ojdbc jars is to pre-allocate, for each prefetched row, an array that accommodates the largest value that could conceivably be returned by that query. In my case I had some VARCHAR2(4000) columns in there, so 50 threads * 3 VARCHAR2 columns * 4000 characters per prefetched row was adding up to gigabytes of RAM [yikes]. There does not appear to be an option to say "don't pre-allocate that array, just use the size actually needed." Ojdbc even keeps these pre-allocated buffers around between PreparedStatements so it can reuse them. Definitely a memory hog.
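
To see where the memory goes, here is a back-of-the-envelope sketch using the numbers from this question (50 threads, 3 VARCHAR2(4000) columns, setFetchSize(10000)). It is a rough estimate, not the driver's exact accounting:

    // Rough worst case for the pre-allocated char buffers
    // (a sketch; assumes 2 bytes per Java char, UTF-16).
    long threads      = 50;      // concurrent statements
    long columns      = 3;       // VARCHAR2 columns in the select list
    long maxChars     = 4000;    // declared VARCHAR2(4000) width
    long fetchSize    = 10000;   // setFetchSize(10000)
    long bytesPerChar = 2;
    long bytes = threads * columns * maxChars * fetchSize * bytesPerChar;
    // 50 * 3 * 4000 * 10000 * 2 = 12,000,000,000 bytes, roughly 11 GB of buffers
    System.out.println(bytes / (1024.0 * 1024 * 1024) + " GB");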

The fix was to determine the maximum actual column size, then replace the query with select substr(column_name, 0, 50) (assuming 50 is the real maximum), and also to profile and only raise setFetchSize as high as actually produced significant speed improvements.
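
A minimal sketch of what that looks like, assuming an open Connection conn as in the question; the table/column names and the fetch size 400 are placeholders you would replace with your own profiled values:

    // Cap the selected width with substr so the driver sizes its per-row
    // buffers for 50 characters instead of 4000, and use a profiled fetch size.
    String sql = "select substr(col_a, 0, 50) as col_a, "
               + "substr(col_b, 0, 50) as col_b from tablename";
    PreparedStatement statement = conn.prepareStatement(sql);
    statement.setFetchSize(400);              // value found by profiling
    ResultSet rs = statement.executeQuery();
    while (rs.next()) {
        String a = rs.getString("col_a");
        // ... process row ...
    }
    rs.close();
    statement.close();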

Other things you can do: decrease the number of prefetch rows, increase the -Xmx parameter, and only select the columns you actually need.
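
If changing every statement is awkward, a more moderate prefetch can also be set once per connection via Oracle's defaultRowPrefetch connection property; this is not something the answer above relies on, just an illustrative sketch with placeholder URL and credentials:

    // Driver-wide default prefetch instead of a huge per-statement fetch size.
    Properties props = new Properties();
    props.setProperty("user", "scott");                // placeholder
    props.setProperty("password", "tiger");            // placeholder
    props.setProperty("defaultRowPrefetch", "400");    // applies to every statement
    Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);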

Once we were able to use a prefetch of at least 400 on all queries [make sure to profile to see what numbers are good for you; with high latency we saw improvements up to prefetch sizes of 3-4K], performance improved dramatically.

I suppose if you wanted to be really aggressive about sparse "really long" rows, you might be able to re-query when you run into one of these [rare] rows.
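
One way that re-query could look, again as a sketch with placeholder table/column/id names rather than anything from the original answer:

    // Read a capped prefix plus the real length, and go back for the full
    // value only when a row exceeds the cap.
    PreparedStatement statement = conn.prepareStatement(
            "select id, substr(col_a, 0, 50) as col_a, length(col_a) as len from tablename");
    statement.setFetchSize(400);
    ResultSet rs = statement.executeQuery();
    while (rs.next()) {
        String value = rs.getString("col_a");
        if (rs.getInt("len") > 50) {                      // the rare over-long row
            PreparedStatement full = conn.prepareStatement(
                    "select col_a from tablename where id = ?");
            full.setLong(1, rs.getLong("id"));
            ResultSet fullRs = full.executeQuery();
            if (fullRs.next()) {
                value = fullRs.getString(1);              // un-truncated value
            }
            fullRs.close();
            full.close();
        }
        // ... process value ...
    }
    rs.close();
    statement.close();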

Details ad nauseam here
