How to add jar using HiveContext in the spark job


Problem description

I am trying to add a JSONSerDe jar file in order to access JSON data and load it into a Hive table from the Spark job. My code is shown below:

    SparkConf sparkConf = new SparkConf().setAppName("KafkaStreamToHbase");
    JavaSparkContext sc = new JavaSparkContext(sparkConf);
    JavaStreamingContext jssc = new JavaStreamingContext(sc, Durations.seconds(10));
    final SQLContext sqlContext = new SQLContext(sc);
    final HiveContext hiveContext = new HiveContext(sc);

    hiveContext.sql("ADD JAR hdfs://localhost:8020/tmp/hive-serdes-1.0-SNAPSHOT.jar");

    hiveContext.sql("LOAD DATA INPATH '/tmp/mar08/part-00000' OVERWRITE INTO TABLE testjson");

But I end up with the following error:

java.net.MalformedURLException: unknown protocol: hdfs
        at java.net.URL.<init>(URL.java:592)
        at java.net.URL.<init>(URL.java:482)
        at java.net.URL.<init>(URL.java:431)
        at java.net.URI.toURL(URI.java:1096)
        at org.apache.spark.sql.hive.client.ClientWrapper.addJar(ClientWrapper.scala:578)
        at org.apache.spark.sql.hive.HiveContext.addJar(HiveContext.scala:652)
        at org.apache.spark.sql.hive.execution.AddJar.run(commands.scala:89)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
        at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
        at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
        at com.macys.apm.kafka.spark.parquet.KafkaStreamToHbase$2.call(KafkaStreamToHbase.java:148)
        at com.macys.apm.kafka.spark.parquet.KafkaStreamToHbase$2.call(KafkaStreamToHbase.java:141)
        at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:327)
        at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:327)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
        at scala.util.Try$.apply(Try.scala:161)

I was able to add the jar through the Hive shell, but it throws an error when I try to add it using hiveContext.sql() in the Spark job (Java code). Quick help would be greatly appreciated.

Thanks.

Recommended answer

One workaround is to pass the UDF jars at run time with the --jars option of the spark-submit command, or to copy the required jars into the Spark libs directory.

Basically, it supports the file, hdfs, and ivy schemes.
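
For example, a spark-submit invocation along the following lines should ship the serde jar to the driver and executors at launch time. This is only a sketch: the main class is taken from the stack trace above, while the application jar name and the YARN master are assumptions for illustration.

    # Ship the Hive serde jar with the application at submit time (illustrative command)
    spark-submit \
        --class com.macys.apm.kafka.spark.parquet.KafkaStreamToHbase \
        --master yarn \
        --jars hdfs://localhost:8020/tmp/hive-serdes-1.0-SNAPSHOT.jar \
        kafka-stream-to-hbase.jar   # application jar name assumed

Alternatively, dropping hive-serdes-1.0-SNAPSHOT.jar into the Spark lib directory (for example $SPARK_HOME/lib on Spark 1.x) makes it available to every job without any submit-time flags.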

Which version of Spark are you using? I am not able to see the addJar method in ClientWrapper.scala in the latest version.
