SPARK 1.6.1: Task not serializable when evaluating a classifier on a DataFrame
Question
I have a DataFrame, and I map it into an RDD of (score, label) tuples to test an SVMModel.
I am using Zeppelin and Spark 1.6.1.
Here is my code:
import org.apache.spark.mllib.classification.SVMModel
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.sql.Row

val loadedSVMModel = SVMModel.load(sc, pathToSvmModel)

// Clear the default threshold.
loadedSVMModel.clearThreshold()

// Compute raw scores on the test set.
val scoreAndLabels = df.select($"features", $"label")
  .map { case Row(features: Vector, label: Double) =>
    val score = loadedSVMModel.predict(features)
    (score, label)
  }

// Get evaluation metrics.
val metrics = new BinaryClassificationMetrics(scoreAndLabels)
val auROC = metrics.areaUnderROC()

println("Area under ROC = " + auROC)
When executing the code I get an org.apache.spark.SparkException: Task not serializable, and I have a hard time understanding why this is happening and how I can fix it.
- Is it because I am using Zeppelin?
- Is it because of the original DataFrame?
I have executed the SVM example in the Spark Programming Guide, and it worked perfectly. So the reason should be related to one of the points above... I guess.
Here are some relevant elements of the exception stack:
Caused by: java.io.NotSerializableException: org.apache.spark.sql.Column
Serialization stack:
- object not serializable (class: org.apache.spark.sql.Column, value: (sum(CASE WHEN (domainIndex = 0) THEN sumOfScores ELSE 0),mode=Complete,isDistinct=false) AS 0#100278)
- element of array (index: 0)
- array (class [Lorg.apache.spark.sql.Column;, size 372)
I didn't post the full exception stack, because Zeppelin tends to show a very long and mostly irrelevant text. Please let me know if you want me to paste the full exception.
Additional information
The feature vectors are generated using a VectorAssembler() as follows:
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.functions.{lit, sum, when}

// Prepare the vector assembler.
val vecAssembler = new VectorAssembler()
  .setInputCols(arrayOfIndices)
  .setOutputCol("features")

// Aggregation expressions.
val exprs = arrayOfIndices
  .map(c => sum(when($"domainIndex" === c, $"sumOfScores")
    .otherwise(lit(0))).alias(c))

val df = vecAssembler
  .transform(anotherDF.groupBy($"userID", $"val")
    .agg(exprs.head, exprs.tail: _*))
  .select($"userID", $"features", $"val")
  .withColumn("label", sqlCreateLabelValue($"val"))
  .drop($"val").drop($"userID")
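sqlCreateLabelValue is a UDF defined elsewhere in the notebook. A minimal sketch of such a UDF, assuming it simply maps the raw val column to a binary 0.0/1.0 label:

import org.apache.spark.sql.functions.udf

// Hypothetical sketch only: the post does not show this UDF. Assuming it
// maps the raw `val` column to a binary 0.0/1.0 label.
val sqlCreateLabelValue = udf((v: Double) => if (v > 0.0) 1.0 else 0.0)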
Answer
The source of the problem is actually not related to the DataFrame you use, or even directly to Zeppelin. It is more a matter of code organization combined with the existence of a non-serializable object in the same scope.
Since you use an interactive session, all objects are defined in the same scope and become part of the closure. That includes exprs, which looks like a Seq[Column], where Column is not serializable.
This is not a problem when you operate on SQL expressions, because exprs is used only locally, but it becomes problematic when you drop down to RDD operations: exprs is included as part of the closure and leads to the exception. The simplest way to reproduce this behavior (ColumnName is one of the subclasses of Column) is something like this:
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.0.0-SNAPSHOT
/_/
Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_91)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val df = Seq(1, 2, 3).toDF("x")
df: org.apache.spark.sql.DataFrame = [x: int]
scala> val x = $"x"
x: org.apache.spark.sql.ColumnName = x
scala> def f(x: Any) = 0
f: (x: Any)Int
scala> df.select(x).rdd.map(f _)
org.apache.spark.SparkException: Task not serializable
...
Caused by: java.io.NotSerializableException: org.apache.spark.sql.ColumnName
Serialization stack:
- object not serializable (class: org.apache.spark.sql.ColumnName, value: x)
...
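A general way to avoid the capture is to confine the Column values to a local scope, so they never become part of the serialized closure. A sketch of that approach, reusing the names from the question:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lit, sum, when}

// Sketch: keep the non-serializable Column values local to a function so
// they are never captured by the closure shipped to the executors.
def aggregateScores(anotherDF: DataFrame, arrayOfIndices: Array[String]): DataFrame = {
  val exprs = arrayOfIndices.map(c =>
    sum(when(col("domainIndex") === c, col("sumOfScores")).otherwise(lit(0))).alias(c))
  anotherDF.groupBy(col("userID"), col("val")).agg(exprs.head, exprs.tail: _*)
}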
Another way you can try to approach this problem is to mark exprs as transient:
@transient val exprs: Seq[Column] = ???
which works fine in our minimal example as well:
scala> @transient val x = $"x"
x: org.apache.spark.sql.ColumnName = x
scala> df.select(x).rdd.map(f _)
res1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[8] at map at <console>:30
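Applied to the code from the question, that amounts to something like this sketch:

// Sketch: mark the expression sequence from the question as transient so it
// is dropped from the serialized closure instead of failing serialization.
@transient val exprs = arrayOfIndices
  .map(c => sum(when($"domainIndex" === c, $"sumOfScores")
    .otherwise(lit(0))).alias(c))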