Custom schema in spark-csv throwing error in spark 1.4.1


Question

I am trying to process a CSV file using the spark-csv package in the spark-shell in Spark 1.4.1.

scala> import org.apache.spark.sql.hive.HiveContext                                                                                                  
import org.apache.spark.sql.hive.HiveContext                                                                                                         

scala> import org.apache.spark.sql.hive.orc._                                                                                                        
import org.apache.spark.sql.hive.orc._                                                                                                               

scala> import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType};                                                         
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}                                                                 

scala> val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)                                                                               
15/12/21 02:06:24 WARN SparkConf: The configuration key 'spark.yarn.applicationMaster.waitTries' has been deprecated as of Spark 1.3 and may be removed in the future. Please use the new key 'spark.yarn.am.waitTime' instead.
15/12/21 02:06:24 INFO HiveContext: Initializing execution hive, version 0.13.1                                                                      
hiveContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@74cba4b                                                   

scala> val customSchema = StructType(Seq(StructField("year", IntegerType, true),StructField("make", StringType, true),StructField("model", StringType, true),StructField("comment", StringType, true),StructField("blank", StringType, true)))
customSchema: org.apache.spark.sql.types.StructType = StructType(StructField(year,IntegerType,true), StructField(make,StringType,true), StructField(model,StringType,true), StructField(comment,StringType,true), StructField(blank,StringType,true))                                                     

scala> val customSchema = (new StructType).add("year", IntegerType, true).add("make", StringType, true).add("model", StringType, true).add("comment", StringType, true).add("blank", StringType, true)
:24: error: not enough arguments for constructor StructType: (fields: Array[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType. Unspecified value parameter fields.                                                                                                                  

val customSchema = (new StructType).add("year", IntegerType, true).add("make", StringType, true).add("model", StringType,true).add("comment", StringType, true).add("blank", StringType, true)   

Answer

According to the Spark 1.4.1 documentation, StructType has no no-arg constructor, which is why you are getting the error. You need to either upgrade to 1.5.x, where the no-arg constructor (and thus the `add` builder style) is available, or create the schema as you did in your first example:

val customSchema = StructType(Seq(StructField("year", IntegerType, true),StructField("make", StringType, true),StructField("model", StringType, true),StructField("comment", StringType, true),StructField("blank", StringType, true)))
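Once the schema is built this way, it can be passed to the spark-csv reader so that schema inference is skipped. The sketch below assumes a running spark-shell (where `sqlContext` is predefined), a hypothetical file path `cars.csv`, and that the file has a header row:

```scala
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

// Seq-based constructor: works in Spark 1.4.1 (no no-arg constructor needed).
val customSchema = StructType(Seq(
  StructField("year", IntegerType, true),
  StructField("make", StringType, true),
  StructField("model", StringType, true),
  StructField("comment", StringType, true),
  StructField("blank", StringType, true)))

// Hand the schema to the spark-csv data source instead of letting it infer one.
val df = sqlContext.read
  .format("com.databricks.spark.csv") // the spark-csv package
  .option("header", "true")           // assumption: file has a header row
  .schema(customSchema)               // use the explicit schema
  .load("cars.csv")                   // hypothetical path
```

With an explicit schema the reader does not need an extra pass over the data to infer types, which also avoids every column defaulting to StringType.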

