2 Spark streaming jobs with the same consumer group id

Problem description

I am trying to experiment with consumer groups.

Here is my code snippet

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import scala.Tuple2;

import static org.apache.kafka.clients.consumer.ConsumerConfig.GROUP_ID_CONFIG;

public final class App {

private static final int INTERVAL = 5000;

public static void main(String[] args) throws Exception {

    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "xxx:9092");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("auto.offset.reset", "earliest");
    kafkaParams.put("enable.auto.commit", true);
    kafkaParams.put("auto.commit.interval.ms","1000");
    kafkaParams.put("security.protocol","SASL_PLAINTEXT");
    kafkaParams.put("sasl.kerberos.service.name","kafka");
    kafkaParams.put("retries","3");
    kafkaParams.put(GROUP_ID_CONFIG,"mygroup");
    kafkaParams.put("request.timeout.ms","210000");
    kafkaParams.put("session.timeout.ms","180000");
    kafkaParams.put("heartbeat.interval.ms","3000");
    Collection<String> topics = Arrays.asList("venkat4");

    SparkConf conf = new SparkConf();
    JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(INTERVAL));


    final JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                    ssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
            );

    stream.mapToPair(
            new PairFunction<ConsumerRecord<String, String>, String, String>() {
                @Override
                public Tuple2<String, String> call(ConsumerRecord<String, String> record) {
                    return new Tuple2<>(record.key(), record.value());
                }
            }).print();


    ssc.start();
    ssc.awaitTermination();


}

}

When I run two of these Spark streaming jobs concurrently, it fails with the error

Exception in thread "main" java.lang.IllegalStateException: No current assignment for partition venkat4-1
    at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:251)
    at org.apache.kafka.clients.consumer.internals.SubscriptionState.needOffsetReset(SubscriptionState.java:315)
    at org.apache.kafka.clients.consumer.KafkaConsumer.seekToEnd(KafkaConsumer.java:1170)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.latestOffsets(DirectKafkaInputDStream.scala:197)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:214)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
    at scala.Option.orElse(Option.scala:289)

Per this https://www.wisdomjobs.com/e-university/apache-kafka-tutorial-1342/apache-kafka-consumer-group-example-19004.html, creating a separate instance of a Kafka consumer with the same group will trigger a rebalance of partitions. I believe the rebalance is not being tolerated by the consumer. How should I fix this?

Below is the command used

SPARK_KAFKA_VERSION=0.10 spark2-submit --num-executors 2 --master yarn --deploy-mode client --files jaas.conf#jaas.conf,hive.keytab#hive.keytab --driver-java-options "-Djava.security.auth.login.config=./jaas.conf" --class Streaming.App --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./jaas.conf" --conf spark.streaming.kafka.consumer.cache.enabled=false 1-1.0-SNAPSHOT.jar

Solution

Per this https://www.wisdomjobs.com/e-university/apache-kafka-tutorial-1342/apache-kafka-consumer-group-example-19004.html, creating a separate instance of a Kafka consumer with the same group will trigger a rebalance of partitions. I believe the rebalance is not being tolerated by the consumer. How should I fix this?

Right now, all the partitions are consumed by only one consumer. If the data ingestion rate is high, the consumer might be too slow to keep up with the speed of ingestion.

Adding more consumers to the same consumer group lets you consume data from a topic at a higher rate. Spark Streaming uses this approach with 1:1 parallelism between Kafka partitions and Spark partitions; Spark handles it internally.

If you have more consumers than topic partitions, the extra consumers will be idle and resources will be under-utilized. It is always recommended to keep the number of consumers less than or equal to the number of partitions.

Kafka will rebalance if more processes/threads are added. ZooKeeper can be reconfigured by the Kafka cluster if any consumer or broker fails to send a heartbeat to ZooKeeper.

Kafka rebalances partition storage whenever a broker fails or a new partition is added to an existing topic. This is Kafka-specific behavior for balancing data across the partitions on the brokers.

Spark Streaming provides simple 1:1 parallelism between Kafka partitions and Spark partitions. If you do not provide any partition details using ConsumerStrategies.Assign, it consumes from all partitions of the given topic.

Kafka assigns the partitions of a topic to the consumers in a group, so that each partition is consumed by exactly one consumer in the group. Kafka guarantees that a message is only ever read by a single consumer in the group.
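To see this assignment behavior in isolation, here is a minimal sketch using the plain Kafka consumer API of the 0.10 era (outside Spark). The broker address, group id and topic name are the placeholders from the question; the actual partition split is decided by Kafka at rebalance time, and in a real test you may need to poll a few times before the split stabilizes.

import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

val props = new Properties()
props.put("bootstrap.servers", "xxx:9092")   // placeholder broker from the question
props.put("group.id", "mygroup")
props.put("key.deserializer", classOf[StringDeserializer].getName)
props.put("value.deserializer", classOf[StringDeserializer].getName)

// Two consumer instances in the same group subscribing to the same topic:
// after the group rebalances, each holds a disjoint subset of the partitions.
val consumerA = new KafkaConsumer[String, String](props)
val consumerB = new KafkaConsumer[String, String](props)
consumerA.subscribe(List("venkat4").asJava)
consumerB.subscribe(List("venkat4").asJava)

consumerA.poll(1000)   // polling triggers the group join / rebalance
consumerB.poll(1000)
println(consumerA.assignment())   // e.g. [venkat4-0, venkat4-1]
println(consumerB.assignment())   // e.g. [venkat4-2, venkat4-3]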

When you start the second Spark Streaming job, another consumer tries to consume the same partitions under the same consumer group id, so it throws the error.

// imports for this and the following snippets
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

val alertTopics = Array("testtopic")

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> sparkJobConfig.kafkaBrokers,
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> sparkJobConfig.kafkaConsumerGroup,
  "auto.offset.reset" -> "latest"
)

val streamContext = new StreamingContext(sparkContext, Seconds(sparkJobConfig.streamBatchInterval.toLong))

val streamData = KafkaUtils.createDirectStream(streamContext, PreferConsistent, Subscribe[String, String](alertTopics, kafkaParams))
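For completeness, a minimal sketch of how this Subscribe-based stream might be processed and started, mirroring the mapToPair/print logic of the Java snippet in the question; streamContext and streamData refer to the values defined just above.

// Same key/value extraction as the question's mapToPair, then print and start.
streamData.map(record => (record.key, record.value)).print()

streamContext.start()
streamContext.awaitTermination()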

If you want a Spark job that consumes only specific partitions, use the following code.

val topicPartitionsList =  List(new TopicPartition("topic",1))

val alertReqStream1 = KafkaUtils.createDirectStream(streamContext, PreferConsistent, ConsumerStrategies.Assign(topicPartitionsList, kafkaParams))

https://spark.apache.org/docs/2.2.0/streaming-kafka-0-10-integration.html#consumerstrategies

Consumers can join a group by using the same group.id.

val topicPartitionsList =  List(new TopicPartition("topic",3), new TopicPartition("topic",4))

val alertReqStream2 = KafkaUtils.createDirectStream(streamContext, PreferConsistent, ConsumerStrategies.Assign(topicPartitionsList, kafkaParams))

Adding two more consumers like this adds them to the same group id.
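As an illustration only: the two concurrent jobs from the question could share one group id without colliding by building the Assign list from a command-line argument, so the same jar is submitted twice with disjoint partition ranges. The buildStream helper, the argument format and the topic name below are assumptions, not part of the original answer; streamContext and kafkaParams are the values defined in the snippets above.

import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils}
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

// Hypothetical helper: "0,1" -> partitions 0 and 1 of the topic.
def buildStream(partitionArg: String) = {
  val topicPartitionsList = partitionArg.split(",").toList
    .map(p => new TopicPartition("topic", p.trim.toInt))
  KafkaUtils.createDirectStream(
    streamContext,
    PreferConsistent,
    ConsumerStrategies.Assign[String, String](topicPartitionsList, kafkaParams))
}

// Job 1: spark2-submit ... app.jar "0,1"
// Job 2: spark2-submit ... app.jar "2,3"
val alertReqStream = buildStream(args(0))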

Please read the Spark-Kafka integration guide. https://spark.apache.org/docs/2.2.0/streaming-kafka-0-10-integration.html

Hope this helps.
