Debezium flush timeout and OutOfMemoryError errors with MySQL

Question

Using Debezium 0.7 to read from MySQL, but getting flush timeout and OutOfMemoryError errors in the initial snapshot phase. Looking at the logs below, it seems like the connector is trying to write too many messages in one go:

WorkerSourceTask{id=accounts-connector-0} flushing 143706 outstanding messages for offset commit   [org.apache.kafka.connect.runtime.WorkerSourceTask]
WorkerSourceTask{id=accounts-connector-0} Committing offsets   [org.apache.kafka.connect.runtime.WorkerSourceTask]
Exception in thread "RMI TCP Connection(idle)" java.lang.OutOfMemoryError: Java heap space
WorkerSourceTask{id=accounts-connector-0} Failed to flush, timed out while waiting for producer to flush outstanding 143706 messages   [org.apache.kafka.connect.runtime.WorkerSourceTask]

I wonder what the correct settings (http://debezium.io/docs/connectors/mysql/#connector-properties) are for sizeable databases (>50GB). I didn't have this issue with smaller databases. Simply increasing the timeout doesn't seem like a good strategy. I'm currently using the default connector settings.

I changed the settings as suggested below, and it fixed the problem:

OFFSET_FLUSH_TIMEOUT_MS: 60000  # default 5000
OFFSET_FLUSH_INTERVAL_MS: 15000  # default 60000
MAX_BATCH_SIZE: 32768  # default 2048
MAX_QUEUE_SIZE: 131072  # default 8192
HEAP_OPTS: '-Xms2g -Xmx2g'  # default '-Xms1g -Xmx1g'
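
For reference, a minimal sketch of how these values might be wired into a docker-compose service, assuming the debezium/connect image (the service name, broker address, and topic names are illustrative). The image maps OFFSET_FLUSH_TIMEOUT_MS, OFFSET_FLUSH_INTERVAL_MS, and HEAP_OPTS to the corresponding worker and JVM settings; max.batch.size and max.queue.size are Debezium connector properties, shown in the connector registration sketch in the answer below.

connect:
  image: debezium/connect:0.7
  environment:
    BOOTSTRAP_SERVERS: kafka:9092           # Kafka brokers (illustrative)
    GROUP_ID: 1
    CONFIG_STORAGE_TOPIC: connect_configs
    OFFSET_STORAGE_TOPIC: connect_offsets
    OFFSET_FLUSH_TIMEOUT_MS: 60000          # allow more time for each offset flush
    OFFSET_FLUSH_INTERVAL_MS: 15000         # flush offsets more frequently
    HEAP_OPTS: '-Xms2g -Xmx2g'              # larger heap for the snapshot phase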

Answer

This is a very complex question. First of all, the default memory settings for the Debezium Docker images are quite low, so if you are using them it might be necessary to increase them.

Next, there are multiple factors at play. I recommend the following steps (a configuration sketch showing where each setting lives follows the list):

  1. Increase max.batch.size and max.queue.size - this reduces the number of commits.
  2. Increase offset.flush.timeout.ms - this gives Connect time to process the accumulated records.
  3. Decrease offset.flush.interval.ms - this should reduce the amount of accumulated offsets.
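
As a hedged sketch of where each knob lives: the offset.flush.* properties belong to the Kafka Connect worker (connect-distributed.properties, or the environment variables shown in the question), while max.batch.size and max.queue.size are Debezium connector properties passed when registering the connector through the Connect REST API. The connector name, hostnames, and credentials below are placeholders:

# Kafka Connect worker configuration (connect-distributed.properties)
offset.flush.timeout.ms=60000
offset.flush.interval.ms=15000

# Register the Debezium MySQL connector with larger batch/queue sizes
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "accounts-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "accounts",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.accounts",
    "max.batch.size": "32768",
    "max.queue.size": "131072"
  }
}'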

Unfortunately, there is an issue, KAFKA-6551, lurking in the background that can still wreak havoc.
