Teradata-jdbc: What's the point of using Fastload if java has memory limitations?


Question

Here is the link to a sample JDBC Fastload program from the Teradata website: http://developer.teradata.com/doc/connectivity/jdbc/reference/current/samp/T20205JD.java.txt

It inserts only one row, so I modified it to insert 500K rows by replacing the following code:

                        pstmt.setInt(1, 1);
                        pstmt.setString(2, strBuf);
                        pstmt.addBatch();
                        batchCount++;

with:

                        for (int i = 0; i < 500000; i++) {
                            pstmt.setInt(1, i);
                            pstmt.setString(2, strBuf);
                            pstmt.addBatch();
                            batchCount++;
                        }

It of course failed because Java ran out of memory.

So Fastload over JDBC fails to upload EVEN 500K rows of very simple data, because the method addBatch() throws an OutOfMemory exception at some point.

But I read that Fastload is able to upload millions of rows! However, I could not find any real example anywhere. How do I overcome the OutOfMemory Java exception?

Can anybody show an example with JDBC and Fastload (NOT FastloadCSV!) to upload, let's say, 1M rows?

PS:

1) Increasing heap space with -Xmx defeats the purpose, because every additional addBatch() call executes more slowly, and extra heap has its limits (usually 4 GB)

2) I do not need FastloadCSV, because it does not support text qualifiers until TTU 14 and has other issues

Answer

You must setAutoCommit(false) and then simply call executeBatch multiple times, e.g. after every 50,000 or 100,000 addBatch calls, before you run out of memory. Finally you commit.

See Speed up your JDBC/ODBC applications on developer.teradata.com
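A minimal sketch of that batching pattern, applied to the asker's loop. The table name, connection handling, and the 50,000-row flush threshold are illustrative assumptions, not taken from the Teradata sample; the key points are autocommit off, a periodic executeBatch so the driver's buffered rows are released before the heap fills, and a single commit at the end:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class FastloadBatchSketch {

    // Assumed flush threshold: send the accumulated batch every N addBatch calls
    // so the driver never buffers all 500K rows in the heap at once.
    static final int FLUSH_EVERY = 50_000;

    // Pure helper: true when the running row count has reached a flush point.
    static boolean shouldFlush(int batchCount) {
        return batchCount % FLUSH_EVERY == 0;
    }

    // Sketch of the loading loop. Assumes 'con' is an already-open FastLoad
    // connection (TYPE=FASTLOAD in the JDBC URL); table and columns are hypothetical.
    static void load(Connection con, String strBuf) throws SQLException {
        con.setAutoCommit(false); // commit once at the end, not per batch
        try (PreparedStatement pstmt =
                 con.prepareStatement("INSERT INTO mytable VALUES (?, ?)")) {
            int batchCount = 0;
            for (int i = 0; i < 500_000; i++) {
                pstmt.setInt(1, i);
                pstmt.setString(2, strBuf);
                pstmt.addBatch();
                batchCount++;
                if (shouldFlush(batchCount)) {
                    pstmt.executeBatch(); // releases the rows buffered so far
                }
            }
            pstmt.executeBatch(); // send any remainder below the threshold
        }
        con.commit(); // FastLoad applies the rows only at commit
    }
}
```

With this shape the heap only ever holds one flush window of rows (here at most 50,000) instead of the full 500K, which is why the same addBatch loop no longer exhausts memory.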
