What is the max JDBC batch size?


Problem Description

I have a list that keeps growing, and I add rows to a JDBC batch depending on the list size. I forgot to put a limit on the batch so that executeBatch is called at a specified size.

The program has been running for hours. I don't want to stop it, fix the code, and start again right now.

My questions: what decides the size of the batch being added to? What is the maximum number of statements executeBatch() can run at one time? How many times can I call addBatch without calling executeBatch()?
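For illustration only, here is a minimal sketch of the pattern described above, assuming a hypothetical items table: every element goes into one ever-growing batch, and executeBatch() is only called once at the end.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Hypothetical reconstruction of the questioner's pattern: rows keep
// being added to one batch with no size limit, so the whole batch
// accumulates in client memory until the single executeBatch() call.
static void insertAll(Connection conn, List<String> names) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO items (name) VALUES (?)")) {
        for (String name : names) {
            ps.setString(1, name);
            ps.addBatch();               // no limit on batch size here
        }
        ps.executeBatch();               // single flush at the very end
    }
}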

Recommended Answer

PgJDBC has some limitations regarding batches:

  • All request values, and all results, must be accumulated in memory. This includes large blob/clob results, so free memory is the main limiting factor for batch size.

  • Until PgJDBC 9.4 (not yet released at the time of writing), batches that return generated keys always do a round trip for every entry, so they're no better than individual statement executions.

  • Even in 9.4, batches that return generated keys only offer a benefit if the generated values are size-limited. A single text, bytea, or unconstrained varchar field in the requested result will force the driver to do a round trip for every execution (one way to avoid this is sketched just after this list).
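As a hedged illustration of the generated-keys point, the sketch below prepares the insert without requesting generated keys at all, which keeps the batch eligible for efficient round-trip handling. The table and columns are assumptions, not taken from the question.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Preparing WITHOUT java.sql.Statement.RETURN_GENERATED_KEYS lets
// PgJDBC send the whole batch with few round trips; asking for
// generated keys can force one round trip per entry (see above).
static PreparedStatement prepareBatchInsert(Connection conn) throws SQLException {
    return conn.prepareStatement(
            "INSERT INTO items (name, value) VALUES (?, ?)");
    // The round-trip-heavy variant would be:
    // conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
}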

The benefit of batching is a reduction in network round trips, so there's much less point if your DB is local to your app server. There are diminishing returns with increasing batch size, because the total time spent in network waits falls off quickly, so it's often not worth stressing about trying to make batches as big as possible.
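To bound client memory while keeping most of the round-trip savings, a common pattern is to flush the batch at a fixed, moderate size. A minimal sketch follows; the table is hypothetical, and the batch size of 500 is an assumed value, not one prescribed by the answer.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Flush the batch every BATCH_SIZE rows so memory use stays bounded.
// Per the answer, returns diminish quickly as batches grow, so a
// moderate chunk size captures most of the round-trip savings.
static void insertInChunks(Connection conn, List<String> names) throws SQLException {
    final int BATCH_SIZE = 500;          // assumed moderate chunk size
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO items (name) VALUES (?)")) {
        int pending = 0;
        for (String name : names) {
            ps.setString(1, name);
            ps.addBatch();
            if (++pending == BATCH_SIZE) {
                ps.executeBatch();       // send the chunk, free client memory
                pending = 0;
            }
        }
        if (pending > 0) {
            ps.executeBatch();           // flush the final partial chunk
        }
    }
}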

If you're bulk-loading data, seriously consider using the COPY API instead, via PgJDBC's CopyManager, obtained via the PGConnection interface. It lets you stream CSV-like data to the server for rapid bulk loading with very few client/server round trips. Unfortunately, it's remarkably under-documented: it doesn't appear in the main PgJDBC docs at all, only in the API docs.
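A minimal sketch of the COPY route, assuming a hypothetical items table and a tiny in-memory CSV payload; CopyManager and PGConnection are the PgJDBC classes named in the answer, and getCopyAPI()/copyIn() are their standard methods.

import java.io.IOException;
import java.io.StringReader;
import java.sql.Connection;
import java.sql.SQLException;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

// Stream CSV-like rows straight to the server with COPY ... FROM STDIN,
// avoiding per-row round trips entirely.
static long copyItems(Connection conn) throws SQLException, IOException {
    CopyManager copyManager = conn.unwrap(PGConnection.class).getCopyAPI();
    String csv = "alpha,1\nbeta,2\n";    // assumed sample payload
    return copyManager.copyIn(
            "COPY items (name, value) FROM STDIN WITH (FORMAT csv)",
            new StringReader(csv));      // returns the number of rows copied
}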
