"Commit failed for offsets" while committing offset asynchronously


Problem Description

I have a Kafka consumer with which I am consuming data from a particular topic, and I am seeing the exception below. I am using Kafka version 0.10.0.0.

LoggingCommitCallback.onComplete: Commit failed for offsets= {....}, eventType= some_type, time taken= 19ms, error= org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

I added these two extra consumer properties, but it still didn't help:

session.timeout.ms=20000
max.poll.records=500
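
For context, a minimal sketch of how these properties might be wired into the consumer is shown below; the broker address, group id, and deserializers are assumptions, not taken from the question.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed broker address
props.put(ConsumerConfig.GROUP_ID_CONFIG, "some-consumer-group");       // assumed group id
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "20000"); // session.timeout.ms from the question
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");     // max.poll.records from the question
KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);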

I am committing offsets in a separate background thread, as shown below:

kafkaConsumer.commitAsync(new LoggingCommitCallback(consumerType.name()));
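
The LoggingCommitCallback class itself is not shown in the question; a plausible sketch of such a callback, implementing Kafka's OffsetCommitCallback interface, might look like the following (the constructor argument and log format are assumptions):

import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;

class LoggingCommitCallback implements OffsetCommitCallback {
    private final String eventType;

    LoggingCommitCallback(String eventType) {
        this.eventType = eventType;
    }

    @Override
    public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
        if (exception != null) {
            // A CommitFailedException here is what produces the log line quoted above
            System.err.println("Commit failed for offsets=" + offsets
                    + ", eventType=" + eventType + ", error=" + exception);
        }
    }
}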

What does that error mean, and how can I resolve it? Do I need to add some other consumer properties?

Recommended Answer

Yes, lower max.poll.records. You'll get smaller batches of data, but the more frequent calls to poll() that result will help keep the session alive.
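
To illustrate the suggestion, a minimal poll loop is sketched below; the topic name, lowered batch size, and processing step are assumptions, and the props and kafkaConsumer from the earlier sketch are reused.

import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Assuming max.poll.records was lowered (e.g. to 100) in the properties
// before the consumer was constructed, each poll() returns a smaller batch.
kafkaConsumer.subscribe(Collections.singletonList("some-topic")); // assumed topic name
while (true) {
    // Smaller batches mean each iteration finishes well inside session.timeout.ms,
    // so the group coordinator does not rebalance the partitions away.
    ConsumerRecords<String, String> records = kafkaConsumer.poll(100); // 0.10.x API: timeout in ms
    for (ConsumerRecord<String, String> record : records) {
        // process(record) -- placeholder for the application's processing logic
    }
    kafkaConsumer.commitAsync(new LoggingCommitCallback("some_type")); // as in the question
}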

