Apache Spark throws NullPointerException when encountering missing feature


Question

I have a bizarre issue with PySpark when indexing a column of strings in features. Here is my tmp.csv file:

x0,x1,x2,x3 
asd2s,1e1e,1.1,0
asd2s,1e1e,0.1,0
,1e3e,1.2,0
bd34t,1e1e,5.1,1
asd2s,1e3e,0.2,0
bd34t,1e2e,4.3,1

where I have one missing value for 'x0'. First, I'm reading features from the csv file into a DataFrame using pyspark_csv: https://github.com/seahboonsiew/pyspark-csv and then indexing 'x0' with StringIndexer:

import pyspark_csv as pycsv
from pyspark.ml.feature import StringIndexer

sc.addPyFile('pyspark_csv.py')

# parse the csv into a DataFrame, then index the string column 'x0'
features = pycsv.csvToDataFrame(sqlCtx, sc.textFile('tmp.csv'))
indexer = StringIndexer(inputCol='x0', outputCol='x0_idx')
ind = indexer.fit(features).transform(features)
print ind.collect()

When calling 'ind.collect()', Spark throws java.lang.NullPointerException. Everything works fine for a complete column, e.g., for 'x1'.

Does anyone have a clue what is causing this and how to fix it?

Thanks in advance!

Sergey

UPDATE:

I use Spark 1.5.1. The exact error:

File "/spark/spark-1.4.1-bin-hadoop2.6/python/pyspark/sql/dataframe.py", line 258, in show
print(self._jdf.showString(n))

File "/spark/spark-1.4.1-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__

File "/spark/spark-1.4.1-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value

py4j.protocol.Py4JJavaError: An error occurred while calling o444.showString.
: java.lang.NullPointerException
at org.apache.spark.sql.types.Metadata$.org$apache$spark$sql$types$Metadata$$hash(Metadata.scala:208)
at org.apache.spark.sql.types.Metadata$$anonfun$org$apache$spark$sql$types$Metadata$$hash$2.apply(Metadata.scala:196)
at org.apache.spark.sql.types.Metadata$$anonfun$org$apache$spark$sql$types$Metadata$$hash$2.apply(Metadata.scala:196)
... etc

I've tried to create the same DataFrame without reading the csv file,

df = sqlContext.createDataFrame(
  [('asd2s','1e1e',1.1,0), ('asd2s','1e1e',0.1,0), 
  (None,'1e3e',1.2,0), ('bd34t','1e1e',5.1,1), 
  ('asd2s','1e3e',0.2,0), ('bd34t','1e2e',4.3,1)],
  ['x0','x1','x2','x3'])

and it gives the same error. A slightly different example works fine, though:

df = sqlContext.createDataFrame(
  [(0, None, 1.2), (1, '06330986ed', 2.3), 
  (2, 'b7584c2d52', 2.5), (3, None, .8), 
  (4, 'bd17e19b3a', None), (5, '51b5c0f2af', 0.1)],
  ['id', 'x0', 'num'])

# after indexing 'x0':

+---+----------+----+------+
| id|        x0| num|x0_idx|
+---+----------+----+------+
|  0|      null| 1.2|   0.0|
|  1|06330986ed| 2.3|   2.0|
|  2|b7584c2d52| 2.5|   4.0|
|  3|      null| 0.8|   0.0|
|  4|bd17e19b3a|null|   1.0|
|  5|51b5c0f2af| 0.1|   3.0|
+---+----------+----+------+

UPDATE 2:

I've just discovered the same issue in Scala, so I guess it's a Spark bug, not just a PySpark one. In particular, the data frame

val df = sqlContext.createDataFrame(
  Seq(("asd2s","1e1e",1.1,0), ("asd2s","1e1e",0.1,0), 
      (null,"1e3e",1.2,0), ("bd34t","1e1e",5.1,1), 
      ("asd2s","1e3e",0.2,0), ("bd34t","1e2e",4.3,1))
).toDF("x0","x1","x2","x3")

throws java.lang.NullPointerException when indexing the 'x0' feature. Moreover, when indexing 'x0' in the following data frame

val df = sqlContext.createDataFrame(
  Seq((0, null, 1.2), (1, "b", 2.3), 
      (2, "c", 2.5), (3, "a", 0.8), 
      (4, "a", null), (5, "c", 0.1))
).toDF("id", "x0", "num")

I get 'java.lang.UnsupportedOperationException: Schema for type Any is not supported', which is caused by the missing 'num' value in the 5th row. If I replace it with a number, everything works fine, even with the missing value in the 1st row.

I've also tried older versions of Spark (1.4.1), and the result is the same.

Answer

It looks like the module you're using converts empty strings to nulls, and at some point that trips up downstream processing. At first glance it looks like a PySpark bug.

How to fix it? A simple workaround is to either drop nulls before indexing:

features.na.drop()

or replace nulls with some placeholder:

from pyspark.sql.functions import col, when

features.withColumn(
    "x0", when(col("x0").isNull(), "__SOME_PLACEHOLDER__").otherwise(col("x0")))

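For example, here is a minimal sketch of the placeholder workaround applied end to end (assuming the features DataFrame from the question; the placeholder string is arbitrary):

from pyspark.ml.feature import StringIndexer
from pyspark.sql.functions import col, when

# withColumn returns a new DataFrame, so capture the result before indexing
features_filled = features.withColumn(
    "x0", when(col("x0").isNull(), "__SOME_PLACEHOLDER__").otherwise(col("x0")))

# 'x0' no longer contains nulls, so StringIndexer can fit and transform without the NPE
indexer = StringIndexer(inputCol="x0", outputCol="x0_idx")
indexed = indexer.fit(features_filled).transform(features_filled)
indexed.show()
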
Moreover, you can use spark-csv. It is efficient, tested, and, as a bonus, it doesn't convert empty strings to nulls.

features = (sqlContext.read
    .format('com.databricks.spark.csv')
    .option("inferSchema", "true")
    .option("header", "true")
    .load("tmp.csv"))
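
Note that on Spark 1.x spark-csv is an external package, so it has to be made available to the job, for example by launching with --packages com.databricks:spark-csv_2.10:1.2.0 (adjusting the artifact to match your Scala and package versions).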
