Spark SQL 2.0: NullPointerException with a valid PostgreSQL query


Problem description

I have a valid PostgreSQL query: when I copy/paste it into psql, I get the desired result. But when I run it with Spark SQL, it leads to a NullPointerException.

Here is the snippet of code causing the error:

extractDataFrame().show()

private def extractDataFrame(): DataFrame = {
  val query =
    """(
      SELECT events.event_facebook_id, events.name, events.tariffrange,
        eventscounts.attending_count, eventscounts.declined_count, eventscounts.interested_count,
        eventscounts.noreply_count,
        artists.facebookid as artist_facebook_id, artists.likes as artistlikes,
        organizers.organizerid, organizers.likes as organizerlikes,
        places.placeid, places.capacity, places.likes as placelikes
      FROM events
        LEFT JOIN eventscounts on eventscounts.event_facebook_id = events.event_facebook_id
        LEFT JOIN eventsartists on eventsartists.event_id = events.event_facebook_id
          LEFT JOIN artists on eventsartists.artistid = artists.facebookid
        LEFT JOIN eventsorganizers on eventsorganizers.event_id = events.event_facebook_id
          LEFT JOIN organizers on eventsorganizers.organizerurl = organizers.facebookurl
        LEFT JOIN eventsplaces on eventsplaces.event_id = events.event_facebook_id
          LEFT JOIN places on eventsplaces.placefacebookurl = places.facebookurl
      ) df"""

  spark.sqlContext.read.jdbc(databaseURL, query, connectionProperties)
}

The SparkSession is defined as follows:

import java.util.Properties
import org.apache.spark.sql.{DataFrame, SparkSession}

val databaseURL = "jdbc:postgresql://dbHost:5432/ticketapp"
val spark = SparkSession
  .builder
  .master("local[*]")
  .appName("tariffPrediction")
  .getOrCreate()

val connectionProperties = new Properties
connectionProperties.put("user", "simon")
connectionProperties.put("password", "root")

Here is the full stack trace:

[SparkException: Job aborted due to stage failure: Task 0 in stage 27.0 failed 1 times, most recent failure: Lost task 0.0 in stage 27.0 (TID 27, localhost): java.lang.NullPointerException
    at org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter.write(UnsafeRowWriter.java:210)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:]

The most surprising part is that if I remove any one of the LEFT JOIN clauses in the SQL query (whichever one), I don't get any errors...

Answer

I had a very similar issue, albeit with a Teradata data source, and it came down to the column nullability on the DataFrame not matching the underlying data (the column had nullable = false, but some rows had null values in that particular field). In my case the cause was the Teradata JDBC driver not returning the correct column metadata. I have yet to find a workaround for this.
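
To check whether the same nullability mismatch is at play here, the metadata reported by the PostgreSQL JDBC driver can be inspected on the DataFrame before calling .show(); a minimal sketch, reusing extractDataFrame() from the question:

val df = extractDataFrame()
// A column reported as nullable = false that actually contains nulls
// matches the failure mode described above.
df.schema.fields.foreach { f =>
  println(s"${f.name}: ${f.dataType.simpleString}, nullable = ${f.nullable}")
}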

To see the code that is being generated (within which the NPE is being thrown):


  • import org.apache.spark.sql.execution.debug._

  • call .debugCodegen() on the Dataset/DataFrame
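
For example, a minimal sketch applied to the DataFrame from the question:

import org.apache.spark.sql.execution.debug._

val df = extractDataFrame()
// Prints the Java source emitted by whole-stage code generation,
// including the UnsafeRowWriter.write call seen in the stack trace.
df.debugCodegen()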

Hope this helps.
