Flink on Yarn, parallel source with Kafka
Question
I am trying to have parallelism with my Kafka source within my Flink job, but so far I have failed.
I set 4 partitions on my Kafka topic:
$ ./bin/kafka-topics.sh --describe --zookeeper X.X.X.X:2181 --topic mytopic
Topic:mytopic PartitionCount:4 ReplicationFactor:1 Configs:
Topic: mytopic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: mytopic Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: mytopic Partition: 2 Leader: 0 Replicas: 0 Isr: 0
Topic: mytopic Partition: 3 Leader: 0 Replicas: 0 Isr: 0
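For reference, a 4-partition topic like this one can be created with the standard tooling (a sketch, not taken from the original post; same ZooKeeper address assumed):
$ ./bin/kafka-topics.sh --create --zookeeper X.X.X.X:2181 --replication-factor 1 --partitions 4 --topic mytopic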
My Scala code is as follows:
import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

// params is presumably a ParameterTool parsed from the program arguments
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4)
env.getConfig.setGlobalJobParameters(params)

// **** Kafka CONNECTION ****
val properties = new Properties()
properties.setProperty("bootstrap.servers", params.get("server"))
properties.setProperty("group.id", "test")

// **** Get KAFKA source ****
val stream: DataStream[String] = env.addSource(
  new FlinkKafkaConsumer010[String](params.get("topic"), new SimpleStringSchema(), properties))
I run the job on YARN:
$ ./bin/flink run -m yarn-cluster -yn 4 -yjm 8192 -ynm test -ys 1 -ytm 8192 myjar.jar --server X.X.X.X:9092 --topic mytopic
I tried a bunch of things, but my source is not parallelized.
Having several Kafka partitions and at least as many slots / Task Managers should do it, right?
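For what it's worth, pinning the parallelism directly on the source operator should also give four source subtasks; a minimal sketch of that variant (only the explicit .setParallelism call differs from the code above):

// Variant: set the parallelism explicitly on the source operator,
// one subtask per Kafka partition, instead of relying on the global default.
val stream: DataStream[String] = env
  .addSource(new FlinkKafkaConsumer010[String](params.get("topic"), new SimpleStringSchema(), properties))
  .setParallelism(4)

With either form, the Flink web UI should show the source with parallelism 4, and the Kafka consumer distributes the 4 partitions across those subtasks.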
Answer
I had the same issue. I would suggest you check two things:
- In your producer implementation, check whether you are producing the same key for every record flushed to Kafka. You should have well-distributed keys, or simply set the key to null (see the producer sketch after this list). That is, change
producer.send(new ProducerRecord< String,String>("topicName","yourKey","yourMessage")
to
producer.send(new ProducerRecord< String,String>("topicName",null,"yourMessage")
- Check that your Kafka producer library version is compatible with your Kafka consumer library version (a build dependency sketch follows).
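Here is a minimal, self-contained Scala sketch of the null-key variant from the first point (broker address, topic name, and message contents are placeholders, not taken from the question):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object NullKeyProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "X.X.X.X:9092") // placeholder broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // With a null key the default partitioner spreads records across
      // all partitions instead of hashing a single key to one partition.
      for (i <- 1 to 100) {
        producer.send(new ProducerRecord[String, String]("mytopic", null, s"message-$i"))
      }
    } finally {
      producer.close()
    }
  }
}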
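For the second point, the fix is usually just aligning the artifact versions in your build; a hedged sbt sketch (the versions shown are illustrative, match them to whatever your cluster actually runs):

// build.sbt -- versions are illustrative, not prescribed by the answer
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-streaming-scala"      % "1.4.2",
  "org.apache.flink" %% "flink-connector-kafka-0.10" % "1.4.2",
  "org.apache.kafka" %  "kafka-clients"              % "0.10.2.1" // match the broker / connector line
)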