org.apache.kafka.common.errors.RecordTooLargeException - Dropping messages larger than the max size limit and pushing them into another Kafka topic

Question

org.apache.kafka.common.errors.RecordTooLargeException: There are some messages at [Partition=Offset]: {binlog-0=170421} whose size is larger than the fetch size 1048576 and hence cannot be returned.

Hi, I'm getting the above exception and my Apache Beam data pipeline fails. I want the Kafka reader to ignore messages larger than the default size and maybe push them into another topic for logging purposes.

Properties kafkaProps = new Properties();
kafkaProps.setProperty("errors.tolerance", "all");
kafkaProps.setProperty("errors.deadletterqueue.topic.name", "binlogfail");
kafkaProps.setProperty("errors.deadletterqueue.topic.replication.factor", "1");

I tried using the properties above, but I'm still hitting the record-too-large exception.

Kafka Connect sink tasks ignore tolerance limits

This link says that the above properties can be used only during conversion or serialization.

Is there some way to solve the problem I'm facing? Any help would be appreciated.

Answer

I want the Kafka reader to ignore messages larger than the default size

With Beam, I'm not sure you can capture that error and skip the record. You would have to drop down to the raw Kafka Consumer/Producer instances to handle that try-catch logic yourself.
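For illustration, here is a minimal sketch of that try-catch logic with the plain Java consumer. The bootstrap server, group id, and topic name are placeholders, and it assumes an older client (around 0.10.x, which matches the error message above) where poll() still throws RecordTooLargeException and reports the offending offsets via recordTooLargeOffsets():

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.RecordTooLargeException;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");  // placeholder
props.setProperty("group.id", "binlog-reader");            // placeholder
props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("binlog"));
while (true) {
    try {
        ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
        for (ConsumerRecord<byte[], byte[]> record : records) {
            // normal pipeline processing goes here
        }
    } catch (RecordTooLargeException e) {
        // Seek past each oversized record. Only its coordinates are known here;
        // the payload was never fetched, so it cannot be re-published anywhere.
        for (Map.Entry<TopicPartition, Long> entry : e.recordTooLargeOffsets().entrySet()) {
            consumer.seek(entry.getKey(), entry.getValue() + 1);
        }
    }
}

Note that because the oversized payload never reaches the client, the most you could log to a side topic from here is its topic, partition, and offset, not the record itself.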

and maybe push it into another topic for logging purposes.

That isn't possible without first changing the broker settings to allow larger messages, and then changing your client properties to match.
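Concretely, that change might look like the following (5242880 = 5 MB is an illustrative value; the 1048576 in the error above is the consumer-side default):

// broker side (server.properties), cannot be set from the client:
//   message.max.bytes=5242880          largest record the broker will accept
//   replica.fetch.max.bytes=5242880    must be >= message.max.bytes

// consumer side
kafkaProps.setProperty("max.partition.fetch.bytes", "5242880");  // default 1048576

// producer side
kafkaProps.setProperty("max.request.size", "5242880");           // default 1048576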

The errors.* properties are for the Kafka Connect APIs, not for the plain Consumer/Producer clients (which is what Beam uses).
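For contrast, those properties belong in a Connect sink connector's configuration, for example (using the bundled FileStreamSinkConnector as a stand-in):

name=binlog-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
topics=binlog
file=/tmp/binlog-sink.out
errors.tolerance=all
errors.deadletterqueue.topic.name=binlogfail
errors.deadletterqueue.topic.replication.factor=1

Even there, the dead-letter queue only receives records that fail conversion or transformation inside Connect, not records the consumer refuses to fetch.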

Related: How can I send large messages with Kafka (over 15MB)?
