Kafka 0.11.0.0 keeps resetting offsets on restart


Question

I have a problem with Kafka 0.11.0.0

When I create a new topic, put some data into it and consume it with a Java consumer, the offsets for my consumer group disappear after restarting Kafka 0.11.0.0. The topic stays and still contains the same data; only the offsets get purged. This makes the consumer download all records from the topics again. What is weird, only one topic keeps its old, correct offsets; all other offsets get deleted, maybe because that one topic has been there for a while.

I commit all consumed records with commitSync(). The offsets are then saved on my broker; I can restart my Java consumer and it starts from the correct offset, but after restarting the entire Kafka instance the offsets for the consumer group reset to 0. I check the current commits with the kafka-consumer-groups.sh script before consuming after a restart, and it's definitely the broker that resets them.
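For reference, the offset check described above can be done with the script bundled with Kafka, along these lines (the bootstrap server address is a placeholder; the group name is taken from the GroupCoordinator logs below):

```shell
# Describe the committed offsets, log-end offsets and lag for a consumer group
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group scrapperBackup
```

The CURRENT-OFFSET column in its output is what gets reset to 0 after the broker restart.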

I had no problem with this in Kafka 0.10.2.1. I experience this problem only in version 0.11.0.0.

My consumer has auto.offset.reset set to earliest, and auto commit is set to false because I'm committing manually. Kafka data is stored in a non-tmp directory with the necessary permissions. The rest of the broker configuration is default.
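The relevant consumer settings described above look roughly like this (a sketch; the bootstrap server and deserializers are placeholder assumptions, the group id is taken from the logs below):

```properties
bootstrap.servers=localhost:9092
group.id=scrapperBackup
# Commit manually via commitSync() instead of auto-committing
enable.auto.commit=false
# Start from the beginning when no committed offset exists for the group
auto.offset.reset=earliest
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
```

Note that auto.offset.reset=earliest is exactly why a purged offset causes a full re-read: with no committed offset on the broker, the consumer falls back to the start of the topic.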

I need version 0.11.0.0 for transactions. I have no idea where the problem can be. What could be causing this? Is there a new config parameter I missed somewhere?

@Edit The topic that stays also has problems with offsets; it doesn't get entirely purged, but the offset after restarting isn't correct and the consumer re-reads around 15% of its data after every broken restart.

@Edit2 Sometimes, but not always, my server.log is full of:

WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:4, offset:1669}, Current: {epoch:5, offset:1540} for Partition: __consumer_offsets-26 (kafka.server.epoch.LeaderEpochFileCache)

It seems to be connected to my consumer group, because of other log entries:

[2017-08-22 08:59:30,719] INFO [GroupCoordinator 0]: Preparing to rebalance group scrapperBackup with old generation 119 (__consumer_offsets-26) (kafka.coordinator.group.GroupCoordinator)
[2017-08-22 08:59:30,720] INFO [GroupCoordinator 0]: Group scrapperBackup with generation 120 is now empty (__consumer_offsets-26) (kafka.coordinator.group.GroupCoordinator)

There are always logs like this one on restart:

[2017-08-22 09:15:37,948] INFO Partition [__consumer_offsets,26] on broker 0: __consumer_offsets-26 starts at Leader Epoch 6 from offset 1699. Previous Leader Epoch was: 5 (kafka.cluster.Partition)

@Edit3 Creating a new directory for the Kafka/Zookeeper data and recreating everything from scratch helped. I don't know what the problem was, but it now works properly. It seems some error had occurred in the apps' data directories.

Answer

If you experience this problem, download the new Kafka version 0.11.0.1. This problem was fixed in that version.

This bug explains it: https://issues.apache.org/jira/browse/KAFKA-5600

