Spark - java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy


Problem description

I have a DataFrame with the following schema:

root
 |-- QUERY: string (nullable = true)
 |-- TYPE: string (nullable = true)
 |-- DEVICE: string (nullable = true)
 |-- PURCHASE_UNITS_SUM: double (nullable = true)
 |-- CLICK_SUM: decimal(38,18) (nullable = true)
 |-- IMPRESSION_COUNT: long (nullable = false)
 |-- CLICK_THROUGH_RATE: decimal(38,2) (nullable = true)
 |-- PURCHASE_RATE: double (nullable = true)

I am trying to convert some columns into maps (device -> column value):

import org.apache.spark.sql.functions.{col, collect_list, map}
import spark.implicits._  // assumes a SparkSession named `spark`; needed for .as[...] and .map

// Wrap each metric column in a single-entry map keyed by DEVICE, then
// collect the per-row maps for every (QUERY, TYPE) group and merge them.
val result = df.withColumn("CLICK_THROUGH_RATE_MAP",
    map(col("DEVICE"), col("CLICK_THROUGH_RATE")))
  .withColumn("PURCHASE_RATE_MAP",
    map(col("DEVICE"), col("PURCHASE_RATE")))
  .withColumn("PURCHASE_SUM_MAP",
    map(col("DEVICE"), col("PURCHASE_UNITS_SUM")))
  .withColumn("CLICK_SUM_MAP",
    map(col("DEVICE"), col("CLICK_SUM")))
  .withColumn("IMPRESSION_SUM_MAP",
    map(col("DEVICE"), col("IMPRESSION_COUNT")))
  .groupBy("QUERY", "TYPE")
  .agg(collect_list("CLICK_THROUGH_RATE_MAP"),
    collect_list("PURCHASE_RATE_MAP"),
    collect_list("PURCHASE_SUM_MAP"),
    collect_list("CLICK_SUM_MAP"),
    collect_list("IMPRESSION_SUM_MAP"))
  .as[(String, String,
    Seq[Map[String, Double]],
    Seq[Map[String, Double]],
    Seq[Map[String, Double]],
    Seq[Map[String, Double]],
    Seq[Map[String, Double]])]
  .map {
    // `type` is a reserved word in Scala, so bind the column as queryType
    case (query, queryType, list1, list2, list3, list4, list5) =>
      (query, queryType,
        list1.reduce(_ ++ _),
        list2.reduce(_ ++ _),
        list3.reduce(_ ++ _),
        list4.reduce(_ ++ _),
        list5.reduce(_ ++ _))
  }
  .toDF("QUERY",
    "TYPE",
    "CLICK_THROUGH_RATE",
    "PURCHASE_RATE",
    "PURCHASE_UNITS",
    "CLICKS",
    "IMPRESSIONS")

This gives me:

root
 |-- QUERY: string (nullable = true)
 |-- TYPE: string (nullable = true)
 |-- CLICK_THROUGH_RATE: map (nullable = true)
 |    |-- key: string
 |    |-- value: string (valueContainsNull = true)
 |-- PURCHASE_RATE: map (nullable = true)
 |    |-- key: string
 |    |-- value: string (valueContainsNull = true)
 |-- PURCHASE_UNITS: map (nullable = true)
 |    |-- key: string
 |    |-- value: string (valueContainsNull = true)
 |-- CLICKS: map (nullable = true)
 |    |-- key: string
 |    |-- value: string (valueContainsNull = true)
 |-- IMPRESSIONS: map (nullable = true)
 |    |-- key: string
 |    |-- value: string (valueContainsNull = true)
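
To clarify what the reduce(_ ++ _) step does: each map(col("DEVICE"), ...) produces a one-entry map per row, collect_list gathers those maps per (QUERY, TYPE) group, and the reduce merges them into a single device -> value map. A minimal plain-Scala sketch with assumed sample values:

    // Two rows of the same group, each contributing one device entry.
    val perRow = Seq(Map("mobile" -> 0.12), Map("desktop" -> 0.34))  // assumed values
    val merged = perRow.reduce(_ ++ _)  // Map(mobile -> 0.12, desktop -> 0.34)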

But when I do result.count, I get this exception:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 63.0 failed 4 times, most recent failure: Lost task 0.3 in stage 63.0 (TID 62365, ip-10-0-1-52.ec2.internal, executor 2): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
    at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2287)
    at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1417)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2347)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2265)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2341)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2265)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:464)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
    at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:490)
    at sun.reflect.GeneratedMethodAccessor232.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2232)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2341)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2265)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2341)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2265)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:464)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2041)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2029)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2028)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2028)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:966)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:966)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:966)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2262)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2211)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2200)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:777)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
  at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:401)
  at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3389)
  at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
  at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
  at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2550)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2764)
  at org.apache.spark.sql.Dataset.getRows(Dataset.scala:254)
  at org.apache.spark.sql.Dataset.showString(Dataset.scala:291)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:753)
  at org.apache.spark.sql.Dataset.show(Dataset.scala:730)
  ... 53 elided
Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
  at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2287)
  at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1417)
  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2347)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2265)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2341)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2265)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:464)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
  at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:490)
  at sun.reflect.GeneratedMethodAccessor232.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2232)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2341)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2265)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2341)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2265)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2123)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:464)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
  at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
  at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)
  at org.apache.spark.scheduler.Task.run(Task.scala:123)
  at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
  ... 3 more

Am I doing something wrong?

Answer

The same problem occurs with HashMap.

I found the solution here: https://gist.github.com/ramn/5566596

You have to replace the ObjectInputStream class in your code with a new class, ObjectInputStreamWithCustomClassLoader:

    import java.io.{FileInputStream, ObjectInputStream, ObjectStreamClass}

    class ObjectInputStreamWithCustomClassLoader(
      fileInputStream: FileInputStream
    ) extends ObjectInputStream(fileInputStream) {
      override def resolveClass(desc: ObjectStreamClass): Class[_] = {
        // Try the class loader that loaded this class first (it can see
        // the application/REPL classes); fall back to default resolution.
        try { Class.forName(desc.getName, false, getClass.getClassLoader) }
        catch { case ex: ClassNotFoundException => super.resolveClass(desc) }
      }
    }
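
For illustration, a minimal sketch of how the custom stream stands in for a plain ObjectInputStream when reading a serialized object back (the file path and target type here are hypothetical, not from the original gist):

    // Hypothetical usage: deserialize with the custom class loader.
    val in = new ObjectInputStreamWithCustomClassLoader(
      new FileInputStream("/tmp/serialized.obj"))  // assumed path
    val restored = in.readObject().asInstanceOf[Map[String, Double]]  // assumed type
    in.close()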
