Kafka consumer, very long rebalances


Question

We are running a 3-broker Kafka 0.10.0.1 cluster. We have a Java app which spawns many consumer threads consuming from different topics. For every topic we have specified a different consumer group.

A lot of times I see that whenever this application is restarted, one or more consumer groups take more than 5 minutes to receive their partition assignment. Until then, consumers for that topic don't consume anything. If I go to a Kafka broker, run consumer-groups.sh, and describe that particular consumer group, I see that it is rebalancing. In server.log I see lines such as:

Preparing to stabilize group otp-sms-consumer
Stabilized group otp-sms-consumer

Between these two log lines there is usually a gap of about 5 minutes or more. On the consumer side, when I turn on trace-level logging, there is literally no activity during this pause. After a couple of minutes a lot of activity starts. Time-critical data such as OTP SMS messages is stored in that topic, and we cannot tolerate such long delays. What can be the reason for such long rebalances?

Here is our consumer configuration:

auto.commit.interval.ms = 3000
auto.offset.reset = latest
bootstrap.servers = [x.x.x.x:9092, x.x.x.x:9092, x.x.x.x:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = otp-notifications-consumer
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 50
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
session.timeout.ms = 300000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /x/x/client.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

Please help.

Answer

I suspect your cluster version is at least 0.10.1.0, as I see max.poll.interval.ms in your consumer configuration, which was introduced in that version.

Kafka 0.10.1.0 integrates KIP-62, which introduces a rebalance timeout set to max.poll.interval.ms; its default value is 5 minutes.
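The timeouts involved can be seen together in a minimal sketch. The 300000 ms max.poll.interval.ms mirrors the question's configuration (and matches the ~5 minute gap observed); the smaller session.timeout.ms is an assumption here, illustrating that since KIP-62 heartbeats run on a background thread, so session.timeout.ms no longer needs to cover record-processing time:

```java
import java.util.Properties;

public class ConsumerTimeouts {
    public static Properties build() {
        Properties p = new Properties();
        // With KIP-62 (Kafka >= 0.10.1.0) the rebalance timeout equals
        // max.poll.interval.ms; 300000 ms = 5 min matches the observed gap.
        p.setProperty("max.poll.interval.ms", "300000");
        // Assumed smaller value: heartbeats come from a background thread
        // since KIP-62, so session.timeout.ms can be short again.
        p.setProperty("session.timeout.ms", "10000");
        p.setProperty("heartbeat.interval.ms", "3000");
        return p;
    }

    public static void main(String[] args) {
        Properties p = ConsumerTimeouts.build();
        System.out.println("rebalance timeout (ms): "
                + p.getProperty("max.poll.interval.ms"));
    }
}
```

Lowering max.poll.interval.ms would shorten how long the coordinator waits for departed members during a rebalance, at the cost of kicking out consumers whose processing between poll() calls takes longer than that interval.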

I guess if you don't want to wait for the timeout to expire during a rebalance, your consumers need to leave the consumer group cleanly by calling the close() method.
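A minimal sketch of that clean-shutdown pattern, with the KafkaConsumer calls shown as comments so the sketch stays self-contained: in a real app the shutdown hook would call consumer.wakeup() (which makes poll() throw WakeupException), and the finally block would call consumer.close(), which sends a LeaveGroup request so the coordinator can rebalance immediately instead of waiting out the timeout:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanShutdown {
    static final AtomicBoolean running = new AtomicBoolean(true);
    static final AtomicBoolean leftGroupCleanly = new AtomicBoolean(false);

    public static void pollLoop() {
        try {
            while (running.get()) {
                // Real code: records = consumer.poll(100); process(records);
                // consumer.wakeup() from the hook makes poll() throw
                // WakeupException, breaking out of this loop.
                running.set(false); // stand-in for the wakeup in this sketch
            }
        } finally {
            // Real code: consumer.close() sends LeaveGroup, so the broker
            // rebalances at once instead of waiting max.poll.interval.ms.
            leftGroupCleanly.set(true);
        }
    }

    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            running.set(false); // real code: consumer.wakeup();
        }));
        pollLoop();
        System.out.println("left group cleanly: " + leftGroupCleanly.get());
    }
}
```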

