Is it possible to add partitions to an existing topic in Kafka 0.8.2

Question

I have a Kafka cluster running with 2 partitions. I was looking for a way to increase the partition count to 3. However, I don't want to lose existing messages in the topic. I tried stopping Kafka, modifying the server.properties file to increase the number of partitions to 3, and restarting Kafka. However, that does not seem to change anything. Using the Kafka ConsumerOffsetChecker, I still see it is using only 2 partitions. The Kafka version I am using is 0.8.2.2. In version 0.8.1, there used to be a script called kafka-add-partitions.sh, which I guess might do the trick. However, I don't see any such script in 0.8.2. Is there any way of accomplishing this? I did experiment with creating a whole new topic, and that one does seem to use 3 partitions as per the change in the server.properties file. However, for existing topics, it doesn't seem to care.

Answer

Looks like you can use this script instead:

bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter --topic my_topic_name \
   --partitions 40
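
If you want to confirm that the change took effect, the same kafka-topics.sh tool also has a --describe option (same ZooKeeper address and topic placeholders as above):

bin/kafka-topics.sh --zookeeper zk_host:port/chroot --describe --topic my_topic_name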

In the code, it looks like they do the same thing:

 AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(topic, partitionReplicaList, zkClient, true)

kafka-topics.sh executes this piece of code, as does the AddPartitionsCommand (https://github.com/apache/kafka/blob/0.8/core/src/main/scala/kafka/admin/AddPartitionsCommand.scala) used by the kafka-add-partitions.sh script.

However, you have to be aware of re-partitioning when using keys:


Be aware that one use case for partitions is to semantically partition data, and adding partitions doesn't change the partitioning of existing data so this may disturb consumers if they rely on that partition. That is if data is partitioned by hash(key) % number_of_partitions then this partitioning will potentially be shuffled by adding partitions but Kafka will not attempt to automatically redistribute data in any way.
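
To make the quoted warning concrete, here is a minimal, self-contained sketch (plain Scala, not the Kafka API) that assumes the hash(key) % number_of_partitions scheme described above; the object name, keys, and partition counts are made up for illustration:

  // Minimal sketch, not Kafka code: mimics the hash(key) % number_of_partitions
  // assignment from the quote above, using a non-negative hash.
  object PartitionShiftDemo {
    def partitionFor(key: String, numPartitions: Int): Int =
      (key.hashCode & Int.MaxValue) % numPartitions

    def main(args: Array[String]): Unit = {
      val keys = Seq("user-1", "user-2", "user-3", "user-4")
      for (key <- keys) {
        val before = partitionFor(key, 2) // partition with the old count
        val after  = partitionFor(key, 3) // partition with the new count
        println(s"$key: partition $before -> $after")
      }
    }
  }

Under this kind of scheme, a key that mapped to one partition when there were 2 partitions may map to a different partition once there are 3, which is why adding partitions can disturb consumers that rely on the existing key-to-partition mapping; Kafka will not move the data that was already written.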
