Handling NULL values in Spark StringIndexer


Problem description

I have a dataset with some categorical string columns and I want to represent them in double type. I used StringIndexer for this conversion and it works, but when I tried it on another dataset that has NULL values it threw a java.lang.NullPointerException and did not work.

For better understanding, here is my code:

import org.apache.spark.ml.feature.StringIndexer

// df is a var; each pass replaces the original string column
// with its indexed (double) version
for (col <- cols) {
    val out_name = col ++ "_"
    val indexer = new StringIndexer().setInputCol(col).setOutputCol(out_name)
    val indexed = indexer.fit(df).transform(df)
    df = indexed.withColumn(col, indexed(out_name)).drop(out_name)
}

So how can I solve this NULL data problem with StringIndexer?

Or is there any better solution for converting string-typed categorical data with NULL values to double?

Recommended answer

Spark >= 2.2

Since Spark 2.2, NULL values can be handled with the standard handleInvalid Param:

import org.apache.spark.ml.feature.StringIndexer
import spark.implicits._ // for toDF on a local Seq

val df = Seq((0, "a"), (1, "b"), (3, null)).toDF("id", "label")
val indexer = new StringIndexer().setInputCol("label")

By default (handleInvalid = error) it will throw an exception:

indexer.fit(df).transform(df).show

org.apache.spark.SparkException: Failed to execute user defined function($anonfun$9: (string) => double)
  at org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1066)
...
Caused by: org.apache.spark.SparkException: StringIndexer encountered NULL value. To handle or skip NULLS, try setting StringIndexer.handleInvalid.
  at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$9.apply(StringIndexer.scala:251)
...

but configured to skip, rows containing NULLs are dropped:

indexer.setHandleInvalid("skip").fit(df).transform(df).show

+---+-----+---------------------------+
| id|label|strIdx_46a78166054c__output|
+---+-----+---------------------------+
|  0|    a|                        0.0|
|  1|    b|                        1.0|
+---+-----+---------------------------+

or keep, which gives NULLs (and unseen labels) an index of their own:

indexer.setHandleInvalid("keep").fit(df).transform(df).show

+---+-----+---------------------------+
| id|label|strIdx_46a78166054c__output|
+---+-----+---------------------------+
|  0|    a|                        0.0|
|  1|    b|                        1.0|
|  3| null|                        2.0|
+---+-----+---------------------------+
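
Applying this back to the multi-column loop from the question, here is a minimal sketch (assuming Spark >= 2.2, and that cols holds the categorical column names as in the question) that chains one keep-configured indexer per column in a Pipeline:

import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.feature.StringIndexer

// one StringIndexer per categorical column; NULL and unseen labels
// receive the extra index numLabels instead of failing the job
val indexers: Array[PipelineStage] = cols.map { c =>
  new StringIndexer()
    .setInputCol(c)
    .setOutputCol(c ++ "_")
    .setHandleInvalid("keep")
}.toArray

val indexed = new Pipeline().setStages(indexers).fit(df).transform(df)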

Spark < 2.2

As of now (Spark 1.6.1) this problem hasn't been resolved, but there is an open JIRA (SPARK-11569). Unfortunately, it is not easy to find an acceptable behavior: SQL NULL represents a missing/unknown value, so any indexing of it is somewhat meaningless.

Probably the best thing you can do is to use NA actions and either drop:

df.na.drop("column_to_be_indexed" :: Nil)

or fill:

df2.na.fill("__HEREBE_DRAGONS__", "column_to_be_indexed" :: Nil)

before you use the indexer.
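
Putting the fill variant together, a minimal sketch (column_to_be_indexed is a stand-in column name; the placeholder string is the arbitrary one from the answer and simply becomes a category of its own):

import org.apache.spark.ml.feature.StringIndexer

// replace NULLs with a placeholder so the indexer sees a plain string;
// the placeholder is indexed like any other label
val filled = df.na.fill("__HEREBE_DRAGONS__", Seq("column_to_be_indexed"))

val indexed = new StringIndexer()
  .setInputCol("column_to_be_indexed")
  .setOutputCol("column_to_be_indexed_")
  .fit(filled)
  .transform(filled)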

