Get a java.lang.LinkageError: ClassCastException when using Spark SQL (HiveQL) on YARN
Question
This is the driver I upload to the yarn-cluster:
package com.baidu.spark.forhivetest

import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.hive._
import org.apache.spark.SparkContext

object ForTest {
  def main(args: Array[String]) {
    val sc = new SparkContext()
    // Note: HiveContext extends SQLContext, so the separate SQLContext is redundant here.
    val sqlc = new SQLContext(sc)
    val hivec = new HiveContext(sc)
    // Create the table through the Hive metastore, then read its schema back.
    hivec.sql("CREATE TABLE IF NOT EXISTS newtest (time TIMESTAMP, word STRING, current_city_name STRING, content_src_name STRING, content_name STRING)")
    val schema = hivec.table("newtest").schema
    println(schema)
  }
}
In the Hive config file I set hive.metastore.uris and hive.metastore.warehouse.dir.
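For reference, the same properties can also be set programmatically on the HiveContext. A minimal sketch, with a placeholder metastore host and warehouse path (neither comes from the question); in practice the hive-site.xml route is the usual one, since the file is read when the Hive client initializes:

// A sketch only: placeholder values, not the asker's actual settings.
hivec.setConf("hive.metastore.uris", "thrift://metastore-host:9083")
hivec.setConf("hive.metastore.warehouse.dir", "hdfs:///user/hive/warehouse")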
On spark-submit I added these jars:
- datanucleus-api-jdo-3.2.6.jar
- datanucleus-core-3.2.10.jar
- datanucleus-rdbms-3.2.9.jar
Even after adding mysql-connector-java-5.1.38-bin.jar and spark-1.6.0-bin-hadoop2.6/lib/guava-14.0.1.jar, I still get this error!
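For context, the submit command was presumably along the lines of the sketch below; the application jar name and the hive-site.xml path are placeholders, not details given in the question:

# A sketch of the likely submit command (placeholder jar and file names).
spark-submit \
  --master yarn-cluster \
  --class com.baidu.spark.forhivetest.ForTest \
  --jars datanucleus-api-jdo-3.2.6.jar,datanucleus-core-3.2.10.jar,datanucleus-rdbms-3.2.9.jar,mysql-connector-java-5.1.38-bin.jar,guava-14.0.1.jar \
  --files /path/to/hive-site.xml \
  forhivetest.jar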
But when I run this Spark job from the IDE, it runs successfully!
Hope someone can help me! Thanks a lot!
This is the error message:
java.lang.LinkageError: ClassCastException: attempting to cast jar:file:/mnt/hadoop/yarn/local/filecache/18/spark-assembly-1.6.0-hadoop2.6.0.jar!/javax/ws/rs/ext/RuntimeDelegate.class to jar:file:/mnt/hadoop/yarn/local/filecache/18/spark-assembly-1.6.0-hadoop2.6.0.jar!/javax/ws/rs/ext/RuntimeDelegate.class
at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:116)
at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
at javax.ws.rs.core.MediaType.<clinit>(MediaType.java:44)
at com.sun.jersey.core.header.MediaTypes.<clinit>(MediaTypes.java:64)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:182)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:175)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
at com.sun.jersey.api.client.Client.init(Client.java:342)
at com.sun.jersey.api.client.Client.access$000(Client.java:118)
at com.sun.jersey.api.client.Client$1.f(Client.java:191)
at com.sun.jersey.api.client.Client$1.f(Client.java:187)
at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
at com.sun.jersey.api.client.Client.<init>(Client.java:187)
at com.sun.jersey.api.client.Client.<init>(Client.java:170)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:268)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.hive.ql.hooks.ATSHook.<init>(ATSHook.java:67)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(HookUtils.java:60)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1309)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1293)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1347)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:484)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:473)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:279)
at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:226)
at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:225)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:268)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:473)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:463)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:605)
at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at com.baidu.spark.forhivetest.ForTest$.main(ForTest.scala:12)
at com.baidu.spark.forhivetest.ForTest.main(ForTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
16/03/22 17:04:32 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.LinkageError: ClassCastException: attempting to cast jar:file:/mnt/hadoop/yarn/local/filecache/18/spark-assembly-1.6.0-hadoop2.6.0.jar!/javax/ws/rs/ext/RuntimeDelegate.class to jar:file:/mnt/hadoop/yarn/local/filecache/18/spark-assembly-1.6.0-hadoop2.6.0.jar!/javax/ws/rs/ext/RuntimeDelegate.class)
Answer: This has to do with your classpath. Try not to build a fat jar. The error shows javax.ws.rs.ext.RuntimeDelegate being loaded by two different classloaders, which is what happens when your application jar bundles classes the cluster already ships in the Spark assembly.
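One way to follow that advice, sketched for sbt (the question does not say which build tool was used, so this is an assumption): mark the Spark artifacts as provided, so they are available at compile time but are not bundled into the application jar, leaving the cluster's own copies as the only ones on the classpath.

// build.sbt -- a sketch assuming sbt; versions match the Spark 1.6.0 / Scala 2.10 build in the question.
scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  // "provided": used for compilation, supplied by the cluster at runtime.
  "org.apache.spark" %% "spark-core" % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-hive" % "1.6.0" % "provided"
)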