HSQLDB optimize 1.000.000 bulk insert
Question
I need to insert 1.000.000 entries into HSQLDB on Tomcat as fast as possible, but 64m (the default MaxPermSize on Tomcat) is not enough for this code, and I get an "OutOfMemoryError" (I want to insert with the default settings).
connection.setAutoCommit(false);
PreparedStatement preparedStatement = connection.prepareStatement("INSERT INTO USER (firstName, secondName) VALUES(?,?)");
for (int i = 0; i < 1000000; i++) {
    preparedStatement.setString(1, "firstName");
    preparedStatement.setString(2, "secondName");
    preparedStatement.addBatch();
}
preparedStatement.executeBatch();
connection.commit();
I read this: http://hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations. I set "SET FILES LOG FALSE" but it doesn't help.
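For reference, that setting can be issued over the same JDBC connection before the batch loop. This is only a sketch, assuming the `connection` object from the code above and the HSQLDB 2.x syntax described in the linked guide:

```java
// Turn off the transaction log for the duration of the bulk load
// (trades durability for speed; HSQLDB-specific SQL).
try (Statement stmt = connection.createStatement()) {
    stmt.execute("SET FILES LOG FALSE");
}

// ... run the batched INSERTs here ...

// Turn logging back on and force a checkpoint so the loaded data is persisted.
try (Statement stmt = connection.createStatement()) {
    stmt.execute("SET FILES LOG TRUE");
    stmt.execute("CHECKPOINT");
}
```

Note that this helps with disk I/O during the load, not with the Java memory problem: the batched rows are held in memory by the driver either way.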
- Is it possible to insert 1.000.000 entries with MaxPermSize = 64m?
- Why does Tomcat eat so much memory with this code? It should be 1.000.000 * 19 (length of "firstName" + length of "secondName") * 2 (bytes per character) = ~40Mb.
- Why is inserting into a MEMORY table faster than inserting into a CACHED table? Am I doing something wrong?
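For what it's worth, the ~40Mb figure in the second question can be reproduced with a quick back-of-the-envelope calculation. This is only a sketch: it assumes UTF-16 storage at 2 bytes per character and deliberately ignores object headers, references, and driver bookkeeping, which is part of why real usage is much higher:

```java
public class MemoryEstimate {
    public static void main(String[] args) {
        long rows = 1_000_000L;
        // 9 chars in "firstName" + 10 in "secondName" = 19 chars per row
        int charsPerRow = "firstName".length() + "secondName".length();
        long rawBytes = rows * charsPerRow * 2; // 2 bytes per UTF-16 char
        System.out.println(rawBytes + " bytes");                 // 38000000 bytes
        System.out.printf("~%.1f MB%n", rawBytes / 1_000_000.0); // ~38.0 MB
    }
}
```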
Recommended answer
- Maybe try doing it in smaller batches. It will consume less memory and will probably be more efficient.
- Calculating the memory footprint is much harder than that. For example, you don't store "firstName" one million times; the value is interned, but you still have to store one million references to it. And then all your libraries consume memory too, and so on.
- What do you call a "cached table"?
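The interning point can be demonstrated directly in Java: identical string literals share a single instance, so it is the references (and the driver's per-row bookkeeping), not the characters, that multiply by a million. A minimal sketch:

```java
public class InternDemo {
    public static void main(String[] args) {
        // String literals are interned by the JVM: both variables point to
        // the same object, so the characters of "firstName" are stored once.
        String a = "firstName";
        String b = "firstName";
        System.out.println(a == b); // true: same interned instance
        // Each batched row still costs a reference (4-8 bytes) per column,
        // plus whatever the JDBC driver keeps for every queued statement.
    }
}
```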
Try this; at the very least it will consume less memory:
connection.setAutoCommit(false);
PreparedStatement preparedStatement = connection.prepareStatement("INSERT INTO USER (firstName, secondName) VALUES(?,?)");
for (int i = 0; i < 1000000; i++) {
    preparedStatement.setString(1, "firstName");
    preparedStatement.setString(2, "secondName");
    preparedStatement.addBatch();
    // flush every 1000 rows so the driver never queues the whole million
    if (i % 1000 == 0) {
        preparedStatement.executeBatch();
    }
}
preparedStatement.executeBatch();
connection.commit();
EDIT: Are you sure it is because of the perm size? Can you post the stack trace?
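For context, the stack trace would settle this: heap exhaustion and PermGen exhaustion report different messages, and batched row data lives on the heap, so `-Xmx` is the relevant knob rather than `-XX:MaxPermSize`. A hypothetical Tomcat `setenv.sh` fragment raising both limits (the exact values here are illustrative assumptions, not recommendations):

```shell
# Heap OOM -- what a large JDBC batch typically triggers:
#   java.lang.OutOfMemoryError: Java heap space
# PermGen OOM -- class metadata, on pre-Java-8 JVMs:
#   java.lang.OutOfMemoryError: PermGen space
export JAVA_OPTS="$JAVA_OPTS -Xmx256m -XX:MaxPermSize=128m"
```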