org.apache.kafka.common.errors.RecordTooLargeException - Dropping messages larger than the max limit and pushing them into another Kafka topic


Question

org.apache.kafka.common.errors.RecordTooLargeException: There are some messages at [Partition=Offset]: {binlog-0=170421} whose size is larger than the fetch size 1048576 and hence cannot be returned.

Hi, I'm getting the above exception and my Apache Beam data pipeline fails. I want the Kafka reader to ignore messages larger than the default size and maybe push them into another topic for logging purposes.

// Dead-letter-queue settings I tried adding to the consumer properties:
Properties kafkaProps = new Properties();
kafkaProps.setProperty("errors.tolerance", "all");
kafkaProps.setProperty("errors.deadletterqueue.topic.name", "binlogfail");
kafkaProps.setProperty("errors.deadletterqueue.topic.replication.factor", "1");

Tried using the above, but I'm still facing the record-too-large exception.

Kafka Connect sink task ignores tolerance limits

This link says that the above properties can be used only during conversion or serialization.

Is there some way to solve the problem I'm facing? Any help would be appreciated.

Answer

I want the Kafka reader to ignore messages larger than the default size

With Beam, I'm not sure you can capture that error and skip it. You would have to go down to the raw Kafka Consumer/Producer instances to handle that try-catch logic.
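
A rough sketch of that raw-client approach, assuming an older kafka-clients version (the ones whose consumer throws RecordTooLargeException from poll() and attaches the offending offsets via recordTooLargePartitions()). The bootstrap servers, group id, and topic names are placeholders, not values from the question:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.RecordTooLargeException;

public class SkipOversizedRecords {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        consumerProps.setProperty("group.id", "binlog-reader");           // placeholder
        consumerProps.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        consumerProps.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        Properties producerProps = new Properties();
        producerProps.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        producerProps.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> dlqProducer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("binlog"));
            while (true) {
                try {
                    // Older clients only expose the poll(long timeoutMs) overload.
                    ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                    for (ConsumerRecord<byte[], byte[]> record : records) {
                        // ... normal processing ...
                    }
                } catch (RecordTooLargeException e) {
                    // The consumer-fetch variant of this exception carries a
                    // partition -> offset map of the oversized records; seek
                    // one past each and note it on a side topic.
                    Map<TopicPartition, Long> tooLarge = e.recordTooLargePartitions();
                    if (tooLarge == null) {
                        throw e; // not the fetch variant; rethrow
                    }
                    for (Map.Entry<TopicPartition, Long> entry : tooLarge.entrySet()) {
                        consumer.seek(entry.getKey(), entry.getValue() + 1);
                        dlqProducer.send(new ProducerRecord<>("binlogfail",
                                entry.getKey().toString(),
                                "skipped oversized record at offset " + entry.getValue()));
                    }
                }
            }
        }
    }
}

Note that this can only log the partition and offset to the side topic, not forward the original payload: the oversized record can never be fetched by this consumer in the first place.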

and maybe push them into another topic for logging purposes.

That isn't possible without changing the broker settings to first allow larger messages, and then changing your client properties.

The errors.* properties are for the Kafka Connect APIs, not the Consumer/Producer clients (which is what Beam uses).
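
For contrast, this is roughly where those properties belong: a Kafka Connect sink connector configuration. The connector name, file path, and the stock FileStreamSinkConnector are illustrative only.

# Hypothetical Connect sink connector config; the errors.* dead-letter-queue
# settings only take effect here, during Connect's conversion/transform phase.
name=binlog-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
topics=binlog
file=/tmp/binlog.out
errors.tolerance=all
errors.deadletterqueue.topic.name=binlogfail
errors.deadletterqueue.topic.replication.factor=1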

Related - How can I send large messages with Kafka (over 15MB)?

