Kafka 0.11.0.0 keeps resetting offsets on restart

Question

I have a problem with Kafka 0.11.0.0

When I create a new topic, put some data into it, and consume it with a Java consumer, the offsets for my consumer group disappear after restarting Kafka 0.11.0.0. The topic stays and still holds the same data; only the offsets get purged. This makes the consumer download all records from the topic again. What is weird is that only one topic keeps its old, correct offsets; all other offsets get deleted, maybe because that one topic had existed for a while.

I commit all consumed records with commitSync(). The offset is then saved on my broker; I can restart my Java consumer and it starts from the correct offset. But after restarting the entire Kafka broker, the offsets for my consumer groups reset to 0. After a restart, I check the current commits with the kafka-consumer-groups.sh script before consuming, and it is definitely the broker that resets them.
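The check against the broker can be done with the tooling shipped with Kafka; a sketch of the invocation, assuming a single local broker on the default port and using the group name that appears in the GroupCoordinator log lines below:

```shell
# Describe the consumer group's committed offsets and lag per partition.
# localhost:9092 is a placeholder broker address; scrapperBackup is the
# group id from the coordinator logs quoted later in this question.
bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group scrapperBackup
```

The CURRENT-OFFSET column shows what the broker has committed for each partition; if it reads as 0 (or unknown) right after a restart even though commitSync() succeeded before, the commits were lost on the broker side.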

I had no problem with this in Kafka 0.10.2.1. I experience this problem only in version 0.11.0.0.

My consumer has auto.offset.reset set to earliest, and auto commit is set to false because I'm committing manually. Kafka data is stored in a non-tmp directory with the necessary permissions. The rest of the broker configuration is default.
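For reference, a minimal sketch of the consumer configuration described above; the broker address is a placeholder, and the group id is taken from the coordinator log lines quoted below:

```java
import java.util.Properties;

// Sketch of the consumer configuration described in the question:
// manual commits (enable.auto.commit=false) with auto.offset.reset=earliest.
public class ConsumerConfigSketch {

    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "scrapperBackup");          // group id from the coordinator logs
        props.put("enable.auto.commit", "false");         // commits are issued manually via commitSync()
        props.put("auto.offset.reset", "earliest");       // applies only when NO committed offset exists,
                                                          // which is why a purged offset replays the topic
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("auto.offset.reset")); // prints "earliest"
    }
}
```

This combination explains the observed behavior: with auto commit off, the broker-side committed offset is the only position the group has, and with auto.offset.reset=earliest, losing that offset makes the consumer fall back to the beginning of each partition and re-download everything.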

I need the 0.11.0.0 version for transactions. I have no idea where the problem can be. What could be the cause of this? Is there a new config parameter I missed somewhere?

@Edit The topic that survives also has problems with offsets; they don't get entirely purged, but the offset after restarting isn't correct, and the consumer re-reads around ~15% of its data after every broker restart.

@Edit2 Sometimes, but not always, my server.log is full of:

WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:4, offset:1669}, Current: {epoch:5, offset:1540} for Partition: __consumer_offsets-26 (kafka.server.epoch.LeaderEpochFileCache)

It seems connected to my consumer group, judging by other log entries:

[2017-08-22 08:59:30,719] INFO [GroupCoordinator 0]: Preparing to rebalance group scrapperBackup with old generation 119 (__consumer_offsets-26) (kafka.coordinator.group.GroupCoordinator)
[2017-08-22 08:59:30,720] INFO [GroupCoordinator 0]: Group scrapperBackup with generation 120 is now empty (__consumer_offsets-26) (kafka.coordinator.group.GroupCoordinator)

There are always logs like this one on restart:

[2017-08-22 09:15:37,948] INFO Partition [__consumer_offsets,26] on broker 0: __consumer_offsets-26 starts at Leader Epoch 6 from offset 1699. Previous Leader Epoch was: 5 (kafka.cluster.Partition)

@Edit3 Creating a new directory for the Kafka/Zookeeper data and recreating everything from scratch helped. I don't know what the problem was, but it works properly now. It seems some error occurred in the apps' data directories.

Answer

If you experience this problem, upgrade to the new Kafka version 0.11.0.1. The problem was fixed in that release.

This JIRA issue explains the bug: https://issues.apache.org/jira/browse/KAFKA-5600
