Unable to describe Kafka Streams Consumer Group


Question

What I want to achieve is to be sure that my Kafka Streams consumer does not have lag.

I have a simple Kafka Streams application that materializes one topic as a store in the form of a GlobalKTable.

When I try to describe the consumer group on Kafka with the command:

kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-application-id

I can't see any results, and there is no error either. When I list all consumer groups with:

kafka-consumer-groups --bootstrap-server localhost:9092 --describe --all-groups

my application's consumer group is listed correctly.

Any idea where to find additional information about why I can't describe the consumer? (Any other Kafka Streams consumers that write to topics can be described correctly.)

Answer

If your application only materializes a topic into a GlobalKTable, no consumer group is formed. Internally, the "global consumer" does not use subscribe() but assign(), no consumer group.id is configured (as you can verify from the logs), and no offsets are committed.

The reason is that all application instances need to consume all topic partitions (i.e., a broadcast pattern). However, a consumer group is designed such that different instances read different partitions of the same topic. Also, per consumer group, only one offset can be committed per partition; if multiple instances read the same partition and committed offsets using the same group.id, the commits would overwrite each other.
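The offset-overwrite problem can be illustrated with a small simulation (plain Python, not the real Kafka client; the commit store is a deliberate simplification): the broker keeps at most one committed offset per (group.id, topic, partition), so two broadcast instances that both consume the same partition under one group.id clobber each other's progress.

```python
# Simplified model of Kafka's offset commit store:
# one slot per (group.id, topic, partition) -- not the real broker protocol.
commit_store = {}

def commit(group_id, topic, partition, offset):
    # Last write wins, because there is only one slot per key.
    commit_store[(group_id, topic, partition)] = offset

# Two "broadcast" instances both consume partition 0 of the same topic
# and commit under the same group.id:
commit("my-application-id", "some-topic", 0, 42)  # instance A, further ahead
commit("my-application-id", "some-topic", 0, 17)  # instance B, slower

# Instance A's progress is lost -- after a restart it would rewind to 17.
print(commit_store[("my-application-id", "some-topic", 0)])  # 17
```

This is why a shared group.id only makes sense when partitions are split across instances, not replicated to all of them.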

Hence, using a consumer group while "broadcasting" data does not work.

However, all consumers should expose the lag metrics records-lag-max and records-lag (cf. https://kafka.apache.org/documentation/#consumer_fetch_monitoring). Hence, you should be able to hook in via JMX to monitor the lag. Kafka Streams also exposes client metrics via KafkaStreams#metrics().
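As a sketch of what monitoring that lag could look like, the following filters a metrics snapshot for the two lag metrics. The snapshot shape (metric name paired with a client id, mapped to a value) and the sample numbers are assumptions for illustration; real code would read these values from KafkaStreams#metrics() in Java or from the consumer's JMX MBeans.

```python
# Hypothetical snapshot of consumer metrics: (metric_name, client_id) -> value.
# In practice this would be populated from JMX or KafkaStreams#metrics().
metrics = {
    ("records-lag-max", "global-consumer"): 128.0,
    ("records-lag", "global-consumer"): 37.0,
    ("fetch-rate", "global-consumer"): 12.5,
}

def lag_metrics(snapshot):
    """Keep only the per-consumer lag metrics named in the Kafka docs."""
    wanted = {"records-lag", "records-lag-max"}
    return {key: value for key, value in snapshot.items() if key[0] in wanted}

print(lag_metrics(metrics))
```

An alerting loop would poll such a snapshot periodically and flag any consumer whose records-lag-max stays above a chosen threshold.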
