org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions exception when writing a hive partitioned table from spark(2.11) dataframe

Question

I have this strange behavior. My use case is to write a Spark dataframe to a Hive partitioned table by using

sqlContext.sql("INSERT OVERWRITE TABLE <table> PARTITION (<partition column) SELECT * FROM <temp table from dataframe>") 

The strange thing is that this works when using the pyspark shell from a host A, but the same exact code, connected to the same cluster and using the same Hive table, does not work in a Jupyter notebook; it returns:

java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions 

So it seems to me that there is a jar mismatch between the host where the pyspark shell is launched and the host where Jupyter is launched. My questions are: how can I determine from code which version of the corresponding jar is being used in the pyspark shell and in the Jupyter notebook (I have no access to the Jupyter server)? And why would two distinct versions be in use if both the pyspark shell and Jupyter connect to the same cluster?
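One way to answer this from code, without filesystem access to either host, is to ask the driver JVM which jar the class was actually loaded from. The sketch below uses py4j reflection and assumes the usual sc SparkContext handle is available; run it once in the pyspark shell and once in the notebook and compare the output:

# Ask the driver JVM where org.apache.hadoop.hive.ql.metadata.Hive came from
jvm = sc._jvm
hive_cls = jvm.java.lang.Class.forName("org.apache.hadoop.hive.ql.metadata.Hive")
# getCodeSource() can be null for bootstrap classes, but hive-exec is a regular jar
source = hive_cls.getProtectionDomain().getCodeSource()
print(source.getLocation())  # e.g. file:/.../hive-exec-<version>.jar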

Update: after some research I found that Jupyter is using "Livy", and the Livy host uses hive-exec-2.0.1.jar, while the host where we use the pyspark shell uses hive-exec-1.2.1000.2.5.3.58-3.jar. So I downloaded both jars from the Maven repository and decompiled them, and I found that although the loadDynamicPartitions method exists in both, the method signatures (parameters) differ: in the Livy version, the boolean holdDDLTime parameter is missing.
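The same py4j reflection trick can confirm the signature difference without decompiling anything (again a sketch assuming the sc handle is available in the session):

jvm = sc._jvm
hive_cls = jvm.java.lang.Class.forName("org.apache.hadoop.hive.ql.metadata.Hive")
for m in hive_cls.getMethods():
    if m.getName() == "loadDynamicPartitions":
        print(m)  # prints the full signature; shows whether holdDDLTime is present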

Answer

I had a similar problem. Try getting the Maven dependencies from Cloudera:

<dependencies>
    <!-- Scala and Spark dependencies -->

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.0-cdh5.9.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.6.0-cdh5.9.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.10</artifactId>
        <version>1.6.0-cdh5.9.2</version>
    </dependency>
     <!-- https://mvnrepository.com/artifact/org.apache.hive/hive-exec -->
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-exec</artifactId>
        <version>1.1.0-cdh5.9.2</version>
    </dependency>
    <dependency>
        <groupId>org.scalatest</groupId>
        <artifactId>scalatest_2.10</artifactId>
        <version>3.0.0-SNAP4</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_2.10</artifactId>
        <version>1.4.1</version>
    </dependency>
    <dependency>
        <groupId>commons-dbcp</groupId>
        <artifactId>commons-dbcp</artifactId>
        <version>1.2.2</version>
    </dependency>
    <dependency>
        <groupId>com.databricks</groupId>
        <artifactId>spark-csv_2.10</artifactId>
        <version>1.4.0</version>
    </dependency>
    <dependency>
        <groupId>com.databricks</groupId>
        <artifactId>spark-xml_2.10</artifactId>
        <version>0.2.0</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk</artifactId>
        <version>1.0.12</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3</artifactId>
        <version>1.11.172</version>
    </dependency>
    <dependency>
        <groupId>com.github.scopt</groupId>
        <artifactId>scopt_2.10</artifactId>
        <version>3.2.0</version>
    </dependency>
    <dependency>
        <groupId>javax.mail</groupId>
        <artifactId>mail</artifactId>
        <version>1.4</version>
    </dependency>
</dependencies>
<repositories>
    <repository>
        <id>maven-hadoop</id>
        <name>Hadoop Releases</name>
        <url>https://repository.cloudera.com/content/repositories/releases/</url>
    </repository>
    <repository>
        <id>cloudera-repos</id>
        <name>Cloudera Repos</name>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
</repositories>
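Pinning hive-exec (and the Spark artifacts) to the exact release your cluster runs keeps the loadDynamicPartitions signature your client code sees in sync with what the cluster provides, which is exactly the mismatch identified in the update above. The versions shown here are for a CDH 5.9.2 cluster; substitute the ones matching your own distribution.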
