Kafka Mirror Maker: Sync __consumer_offsets topic duplicates


Problem description

Following the solution mentioned here, kafka-mirror-maker-failing-to-replicate-consumer-offset-topic, I was able to start Mirror Maker without any errors across the DC1 (live Kafka cluster) and DC2 (backup Kafka cluster) clusters.

It looks like it is also able to sync the __consumer_offsets topic to the DC2 cluster from the DC1 cluster.

Question

If I shut down the consumer for DC1 and point the same consumer (same group_id) at DC2, it reads the same messages again, even though Mirror Maker is able to sync offsets for this topic and its partitions.

I can see that LOG-END-OFFSET is shown correctly, but CURRENT-OFFSET still points to the old value, causing LAG.
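For reference, per-partition offsets like those in the tables below can be inspected with Kafka's stock `kafka-consumer-groups.sh` tool; the broker address and group name here are placeholders:

```shell
# Describe the consumer group's committed offsets on a given cluster;
# prints TOPIC, PARTITION, CURRENT-OFFSET, LOG-END-OFFSET and LAG per partition.
bin/kafka-consumer-groups.sh \
  --bootstrap-server dc2-broker:9092 \
  --describe \
  --group my-consumer-group
```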

Example

  • Mirror Maker is still running in DC2.
  • Before the consumer was shut down in DC1:

//DC1  __consumer_offsets topic
+-----------------------------------------------------------------+
| TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  |
+-----------------------------------------------------------------+
| gs.suraj.test.1 0          10626           10626           0    |
| gs.suraj.test.1 2          10619           10619           0    |
| gs.suraj.test.1 1          10598           10598           0    |
+-----------------------------------------------------------------+

  • Stop the consumer in DC1.

    Before the consumer starts up in DC2:

    //DC2  __consumer_offsets topic
    +-----------------------------------------------------------------+
    | TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  |
    +-----------------------------------------------------------------+
    | gs.suraj.test.1 0          9098            10614           1516 |
    | gs.suraj.test.1 2          9098            10614           1516 |
    | gs.suraj.test.1 1          9098            10615           1517 |
    +-----------------------------------------------------------------+
    

  • Because of this lag, when I start the same consumer in DC2 it reads 4549 messages again. This should not happen, as those messages were already read and committed in DC1, and Mirror Maker has synced the __consumer_offsets topic from DC1 to DC2.
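The 4549 figure is simply the sum of the per-partition lags in the DC2 table above (LAG = LOG-END-OFFSET − CURRENT-OFFSET):

```shell
# Per-partition lag on DC2: (10614-9098), (10614-9098), (10615-9098)
total=$(( (10614 - 9098) + (10614 - 9098) + (10615 - 9098) ))
echo "$total"   # 1516 + 1516 + 1517 = 4549
```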

    Please let me know if I am missing anything here.

    Answer

    If you are using Mirror Maker 2.0, the KIP states explicitly in its motivation section that there is no support for exactly-once semantics:

    https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-Motivation

    But they intend to support it in the future.
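Until then, one common manual workaround after failover is to reset the group's committed offsets on the backup cluster, for example to a timestamp near the failover point, using the same `kafka-consumer-groups.sh` tool. The broker, group, topic, and timestamp below are placeholders; running with `--dry-run` instead of `--execute` previews the change first:

```shell
# Reset the group's committed offsets on DC2 to a chosen point in time,
# so the consumer does not re-read messages already processed in DC1.
bin/kafka-consumer-groups.sh \
  --bootstrap-server dc2-broker:9092 \
  --group my-consumer-group \
  --topic gs.suraj.test.1 \
  --reset-offsets \
  --to-datetime 2019-01-01T00:00:00.000 \
  --execute
```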

