Kafka Spark Stream throws Exception: No current assignment for partition

Problem description

Below is my Scala code to create the Spark Kafka stream:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "server110:2181,server110:9092",
  "zookeeper" -> "server110:2181",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
val topics = Array("ABTest")
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)

But after running for 10 hours, it throws this exception:

2017-02-10 10:56:20,000 INFO [JobGenerator] internals.ConsumerCoordinator: Revoking previously assigned partitions [ABTest-0, ABTest-1] for group example
2017-02-10 10:56:20,000 INFO [JobGenerator] internals.AbstractCoordinator: (Re-)joining group example
2017-02-10 10:56:20,011 INFO [JobGenerator] internals.AbstractCoordinator: (Re-)joining group example
2017-02-10 10:56:40,057 INFO [JobGenerator] internals.AbstractCoordinator: Successfully joined group example with generation 5
2017-02-10 10:56:40,058 INFO [JobGenerator] internals.ConsumerCoordinator: Setting newly assigned partitions [ABTest-1] for group example
2017-02-10 10:56:40,080 ERROR [JobScheduler] scheduler.JobScheduler: Error generating jobs for time 1486695380000 ms
java.lang.IllegalStateException: No current assignment for partition ABTest-0
at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:231)
at org.apache.kafka.clients.consumer.internals.SubscriptionState.needOffsetReset(SubscriptionState.java:295)
at org.apache.kafka.clients.consumer.KafkaConsumer.seekToEnd(KafkaConsumer.java:1169)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.latestOffsets(DirectKafkaInputDStream.scala:179)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:196)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
at scala.Option.orElse(Option.scala:289)
at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:48)
at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:117)
at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:116)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:248)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:246)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:246)
at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:182)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:87)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Obviously, partition ABTest-0 has already been revoked from this consumer, but it seems the Spark Streaming consumer is not aware of that and keeps trying to consume the revoked topic-partition, so the exception occurs and the whole Spark job is aborted. I think a Kafka rebalance is a perfectly normal event, so how can I modify my code to make Spark Streaming handle the partition-revoke event correctly?

Recommended answer

It took me a while to figure this one out. It happens because of rebalancing. Instead of the Subscribe ConsumerStrategy, use Assign.
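For reference, here is a minimal sketch of what the Assign-based variant could look like, reusing the ssc and kafkaParams from the question. The partition list and the starting-offset map are illustrative assumptions and would need to match your actual topic layout:

import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Assign

// Assumption: the topic still has exactly two partitions, 0 and 1.
val partitions = Seq(new TopicPartition("ABTest", 0), new TopicPartition("ABTest", 1))

// Optional explicit starting offsets (for example, restored from your own offset store).
// With an empty map, auto.offset.reset decides where each partition starts.
val offsets = Map[TopicPartition, Long]()

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Assign[String, String](partitions, kafkaParams, offsets)
)

Note that with Assign the partitions are fixed by your code rather than handed out by the group coordinator, so no rebalance can take them away mid-stream; the trade-off is that you must update the assignment yourself if the topic's partition layout changes.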
