create a hive table from list of case class using spark


Question

I am trying to create a Hive table from a list of case class instances, but it does not allow me to specify the database name. The following error is thrown.

Spark version: 1.6.2

Error: diagnostics: User class threw exception: org.apache.spark.sql.AnalysisException: Table not found: mytempTable; line 1 pos 58

Please let me know how to save the output of the map method to a Hive table with the same structure as the case class.

Note: the recordArray list is populated in the map method (in the getElem() method, in fact) for the given input.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.hive.HiveContext

    object testing extends Serializable {
      var recordArray = List[Record]()

      def main(args: Array[String]) {
        val inputpath = args(0).toString
        val outputpath = args(1).toString

        val conf = new SparkConf().setAppName("jsonParsing")
        val sc = new SparkContext(conf)
        val sqlContext = new SQLContext(sc)
        val hsc = new HiveContext(sc)

        val input = sc.textFile(inputpath)
        //val input = sc.textFile("file:///Users/Documents/Work/data/mydata.txt")
        //input.collect().foreach(println)

        // parse() and getElem() are helpers elsewhere in the project;
        // getElem() populates recordArray as a side effect of the map
        val parsed = input.map(data => getElem(parse(data, false)))

        val recordRDD = sc.parallelize(recordArray)
        val recordDF = sqlContext.createDataFrame(recordRDD)
        recordDF.registerTempTable("mytempTable")
        hsc.sql("create table dev_db.ingestion as select * from mytempTable")
      }

      case class Record(summary_key: String, key: String, array_name_position: Int,
          Parent_Level_1: String, Parent_level_2: String, Parent_Level_3: String,
          Parent_level_4: String, Parent_level_5: String, param_name_position: Integer,
          Array_name: String, paramname: String, paramvalue: String)
    }

Answer

You need to have/create a HiveContext and use it consistently. A temporary table is only visible to the SQLContext/HiveContext instance that registered it; in the code above, mytempTable is registered on sqlContext but queried through hsc, which is why the table is not found.

    import org.apache.spark.sql.hive.HiveContext

    val sqlContext = new HiveContext(sc)
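
Applied to the code in the question, that means routing both the temp-table registration and the CTAS statement through the same HiveContext. A minimal sketch, assuming recordRDD is the RDD[Record] built in the question:

    import org.apache.spark.sql.hive.HiveContext

    val hsc = new HiveContext(sc)
    // Build the dataframe from the HiveContext so the temp table lands in
    // the same catalog that the CTAS statement below will query
    val recordDF = hsc.createDataFrame(recordRDD)
    recordDF.registerTempTable("mytempTable")
    hsc.sql("create table dev_db.ingestion as select * from mytempTable")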

Then directly save the dataframe, or select the columns you want to store, as a Hive table.

recordDF is the dataframe:

    recordDF.write.mode("overwrite").saveAsTable("schemaName.tableName")

    recordDF.select(recordDF.col("col1"), recordDF.col("col2"), recordDF.col("col3"))
      .write.mode("overwrite").saveAsTable("schemaName.tableName")

    import org.apache.spark.sql.SaveMode
    recordDF.write.mode(SaveMode.Overwrite).saveAsTable("dbName.tableName")

The available SaveModes are Append/Ignore/Overwrite/ErrorIfExists.
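
For instance, a small sketch of an incremental load using SaveMode.Append; the table name reuses dev_db.ingestion from the question purely as a placeholder:

    import org.apache.spark.sql.SaveMode

    // Append adds the new rows to an existing table; ErrorIfExists (the
    // default) would fail instead if the table is already there, and
    // Ignore would silently skip the write
    recordDF.write.mode(SaveMode.Append).saveAsTable("dev_db.ingestion")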

I added here the definition for HiveContext from the Spark documentation:

"In addition to the basic SQLContext, you can also create a HiveContext, which provides a superset of the functionality provided by the basic SQLContext. Additional features include the ability to write queries using the more complete HiveQL parser, access to Hive UDFs, and the ability to read data from Hive tables."
