Hive Warehouse Connector + Spark = signer information does not match signer information of other classes in the same package

Problem description

I'm trying to use the Hive Warehouse Connector with Spark on HDP 3.1 and I get an exception even with the simplest example (below). The class causing the problem, JaninoRuntimeException, is present both in org.codehaus.janino:janino:jar:3.0.8 (a dependency of spark-sql) and in com.hortonworks.hive:hive-warehouse-connector_2.11:jar.
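One way to confirm the duplication is to grep both jars for the class (a sketch assuming sbt's default Ivy cache layout; the paths may differ on your machine):

unzip -l ~/.ivy2/cache/org.codehaus.janino/janino/jars/janino-3.0.8.jar | grep JaninoRuntimeException
unzip -l ~/.ivy2/cache/com.hortonworks.hive/hive-warehouse-connector_2.11/jars/hive-warehouse-connector_2.11-1.0.0.3.1.0.0-78.jar | grep JaninoRuntimeException

If both commands list org/codehaus/janino/JaninoRuntimeException.class, the package really is split across two differently signed jars, which is exactly what the ClassLoader.checkCerts call in the stack trace rejects.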

I've tried excluding the janino library from spark-sql, but that only resulted in other janino classes going missing. And I need HWC for the new functionality.
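For reference, the exclusion attempt looked roughly like this in sbt (a sketch using the standard exclude syntax; as said, it just traded one missing-class problem for another):

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.2.3.1.0.0-78" exclude("org.codehaus.janino", "janino")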

Has anyone had the same error? Any ideas how to deal with it?

The error I'm getting:

Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.CommandLineWrapper.main(CommandLineWrapper.java:66)
Caused by: java.lang.SecurityException: class "org.codehaus.janino.JaninoRuntimeException"'s signer information does not match signer information of other classes in the same package
    at java.lang.ClassLoader.checkCerts(ClassLoader.java:898)
    at java.lang.ClassLoader.preDefineClass(ClassLoader.java:668)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:761)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:197)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateSafeProjection$.create(GenerateSafeProjection.scala:36)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:1321)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3277)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2489)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2703)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
    at Main$.main(Main.scala:15)
    at Main.main(Main.scala)
    ... 5 more

My sbt file:

name := "testHwc"

version := "0.1"

scalaVersion := "2.11.11"

resolvers += "Hortonworks repo" at "http://repo.hortonworks.com/content/repositories/releases/"

libraryDependencies += "org.apache.hadoop" % "hadoop-aws" % "3.1.1.3.1.0.0-78"

// https://mvnrepository.com/artifact/com.hortonworks.hive/hive-warehouse-connector
libraryDependencies += "com.hortonworks.hive" %% "hive-warehouse-connector" % "1.0.0.3.1.0.0-78"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.2.3.1.0.0-78"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.2.3.1.0.0-78"

And the source code:

import com.hortonworks.hwc.HiveWarehouseSession
import org.apache.spark.sql.SparkSession

object Main {
  def main(args: Array[String]): Unit = {

    val ss = SparkSession.builder()
      .config("spark.sql.hive.hiveserver2.jdbc.url", "nnn") // placeholder JDBC URL; the failure happens before it is used
      .master("local[*]").getOrCreate()

    import ss.sqlContext.implicits._

    val rdd = ss.sparkContext.makeRDD(Seq(1, 2, 3, 4, 5, 6, 7))

    rdd.toDF("col1").show() // Main.scala:15 in the stack trace: show() triggers codegen, which loads JaninoRuntimeException
    val hive = HiveWarehouseSession.session(ss).build()
  }
}

Recommended answer

After some investigation I discovered that whether the error occurs depends on the order of the libraries on the classpath.

For an unknown reason, when I ran this project in IntelliJ IDEA the classpath order was always random, so the app would fail and succeed at random.

The bottom line: the HiveWarehouseConnector jar must come after the spark-sql jar on the classpath.
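On a cluster this ordering usually comes for free, because Spark's own jars precede anything passed via --jars. A minimal spark-submit sketch, assuming the standard HDP 3.1 location of the connector assembly and the artifact name produced by the sbt build above (both may differ in your environment):

spark-submit \
  --class Main \
  --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-1.0.0.3.1.0.0-78.jar \
  target/scala-2.11/testhwc_2.11-0.1.jar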

UPDATE

As suggested in this answer, the order inside IntelliJ IDEA can be changed in the Dependencies tab of the module settings.

Beyond that I was not able to solve this inside IntelliJ (the order stayed random), but when I executed the program outside of IntelliJ I could set the order I needed myself.
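For example, when launching with plain java, the -cp entries are searched in the order given, so it is enough to list the connector jar last. A sketch with placeholder lib/ paths (a real local run needs the full set of Spark and Hadoop dependency jars; only the relative order matters here):

java -cp "target/scala-2.11/classes:lib/janino-3.0.8.jar:lib/spark-sql_2.11-2.3.2.3.1.0.0-78.jar:lib/hive-warehouse-connector_2.11-1.0.0.3.1.0.0-78.jar" Main

With this order every org.codehaus.janino class resolves from the janino jar before the connector's copy is consulted, so all classes in the package carry the same signer information.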
