org.apache.spark.SparkException: Task not serializable


Problem description


This is a working code example:

import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

JavaPairDStream<String, String> messages = KafkaUtils.createStream(javaStreamingContext, zkQuorum, group, topicMap);
messages.print();
JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
    }
});

I get the below error:

ERROR:
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1435)
    at org.apache.spark.streaming.dstream.DStream.map(DStream.scala:438)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$class.map(JavaDStreamLike.scala:140)
    at org.apache.spark.streaming.api.java.JavaPairDStream.map(JavaPairDStream.scala:46)

Solution

Since you're defining your map function using an anonymous inner class, the containing class must also be Serializable. Define your map function as a separate class or make it a static inner class. From the Java documentation (http://docs.oracle.com/javase/8/docs/platform/serialization/spec/serial-arch.html):

Note - Serialization of inner classes (i.e., nested classes that are not static member classes), including local and anonymous classes, is strongly discouraged for several reasons. Because inner classes declared in non-static contexts contain implicit non-transient references to enclosing class instances, serializing such an inner class instance will result in serialization of its associated outer class instance as well.
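The behavior that note describes is easy to reproduce with plain JDK serialization, independent of Spark. The sketch below (class and method names are illustrative, not from the original post) serializes an instance of a non-static inner class and of a static nested class declared in a class that is itself not Serializable:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class Demo {
    // Non-static inner class: carries an implicit reference to the
    // enclosing Demo instance, so serializing it also serializes Demo.
    class Inner implements Serializable { }

    // Static nested class: no hidden reference to an outer instance,
    // so it serializes on its own. This is the shape the fix asks for.
    static class Nested implements Serializable { }

    static boolean canSerialize(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;  // the dragged-in enclosing instance was not serializable
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Demo demo = new Demo();  // Demo itself does not implement Serializable
        System.out.println(canSerialize(demo.new Inner()));   // prints "false"
        System.out.println(canSerialize(new Demo.Nested()));  // prints "true"
    }
}
```

The same mechanism is what trips up the Spark job above: declaring the Function as a static nested class (or a top-level class) keeps Spark's closure serialization from trying to ship the enclosing class along with it.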

