JDBC Sink Configuration Options batch.size
Specifies how many records to attempt to batch together for insertion into the destination table, when possible.
Type: int
Default: 3000
Valid Values: [0,…]
Importance: medium
So, this is from the Confluent site.
Importance is medium, the default is 3000. What if I want the Kafka changes written every 30 seconds, even if there are, say, only 27 Kafka messages for the topic? What is the default setting under which processing happens on an elapsed-time basis? We all know this is catered for, since we can run many examples passing just 1 record from, say, MySQL to SQL Server, but I cannot find the parameter for time-based processing. Can I influence it?
I noted https://github.com/confluentinc/kafka-connect-jdbc/issues/290 as well. Some interesting stuff there.
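For reference, batch.size is set in the sink connector's own configuration. A minimal sketch of such a config in standalone .properties form (the connector name, topic, and connection details here are made up for illustration):

    name=test-jdbc-sink
    connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
    topics=orders
    connection.url=jdbc:sqlserver://localhost:1433;databaseName=demo
    connection.user=demo
    connection.password=demo
    insert.mode=insert
    auto.create=true
    # Upper bound on how many records the sink will try to write in a single batch, when possible
    batch.size=3000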
I think you should focus on the words "when possible".
consumer.max.poll.records will always grab up to that many records from Kafka. Once a poll is complete, the JDBC sink will build as many batches as needed until the next consumer poll is called, within consumer.max.poll.interval.ms.
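If you want to experiment with those consumer settings, note that they are not JDBC sink options themselves; they are passed through to the consumer that Connect runs for the sink. A rough sketch, assuming a worker that allows per-connector overrides (connector.client.config.override.policy=All); the values shown are just the usual consumer defaults:

    # Worker config (e.g. connect-distributed.properties): applies to every sink connector's consumer
    consumer.max.poll.records=500

    # Or per connector, inside the sink connector config, if the override policy permits it
    consumer.override.max.poll.records=500
    consumer.override.max.poll.interval.ms=300000

Either way, batch.size only caps how many of the records returned by a single poll go into one insert; on its own it does not introduce a time-based flush interval.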