Hive on Spark: Failed to create spark client


Problem description

I'm trying to make Hive 2.1.1 on Spark 2.1.0 work on a single instance. I'm not sure that's the right approach. Currently I only have one instance so I can't build a cluster.

When I run any insert query in hive, I get the error:

hive> insert into mcus (id, name) values (1, 'ARM');
Query ID = server_20170223121333_416506b4-13ba-45a4-a0a2-8417b187e8cc
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
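
The CLI output above only shows the SparkTask return code; the underlying exception usually ends up in the Hive log. A quick way to look for it (a sketch only; the path is an assumption based on Hive's defaults):

# Hive's default log location is ${java.io.tmpdir}/${user.name}/hive.log
# (an assumption; check hive-log4j2.properties if it is not there).
tail -n 200 /tmp/$USER/hive.log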

I'm afraid that I didn't configure it correctly, since I couldn't find any Spark logs under hdfs dfs -ls /spark/eventlog. Here's the part of my hive-site.xml that relates to Spark and Yarn:

<property>
     <name>hive.exec.stagingdir</name>
     <value>/tmp/hive-staging</value>
 </property>

 <property>
     <name>hive.fetch.task.conversion</name>
     <value>more</value>
 </property>

 <property>
     <name>hive.execution.engine</name>
     <value>spark</value>
 </property>

 <property>
     <name>spark.master</name>
     <value>spark://ThinkPad-W550s-Lab:7077</value>
 </property>

 <property>
     <name>spark.eventLog.enabled</name>
     <value>true</value>
 </property>

 <property>
     <name>spark.eventLog.dir</name>
     <value>hdfs://localhost:8020/spark/eventlog</value>
 </property>
 <property>
     <name>spark.executor.memory</name>
     <value>2g</value>
 </property>

 <property>
     <name>spark.serializer</name>
     <value>org.apache.spark.serializer.KryoSerializer</value>
 </property>

 <property>
     <name>spark.home</name>
     <value>/home/server/spark</value>
 </property>

 <property>
     <name>spark.yarn.jar</name>
     <value>hdfs://localhost:8020/spark-jars/*</value>
 </property>
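
With spark.eventLog.dir and spark.yarn.jar pointing into HDFS as above, both locations have to exist and be populated before a query runs, otherwise there is nothing under /spark/eventlog to find. A minimal sketch (assuming Spark 2.1 is installed at /home/server/spark, as spark.home suggests):

# Spark does not create the event log directory on its own.
hdfs dfs -mkdir -p /spark/eventlog

# Stage the Spark jars so that spark.yarn.jar can resolve hdfs://localhost:8020/spark-jars/*
hdfs dfs -mkdir -p /spark-jars
hdfs dfs -put /home/server/spark/jars/*.jar /spark-jars/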



1) Since I didn't configure the fs.default.name value in Hadoop, could I just use hdfs://localhost:8020 as the file system path in the config file, or should I change the port to 9000? (I get the same error when I change 8020 to 9000.)

2) I start Spark with start-master.sh and start-slave.sh spark://ThinkPad-W550s-Lab:7077; is that correct?
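
For reference, the standalone daemons described there are typically started like this (a sketch using the host name from the question and the spark.home path from the config above):

/home/server/spark/sbin/start-master.sh
/home/server/spark/sbin/start-slave.sh spark://ThinkPad-W550s-Lab:7077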

3) According to this thread, how could I check the value of Spark Executor Memory + Overhead in order to set the values of yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb?

The values of yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb are far bigger than spark.executor.memory.
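
For illustration only: with spark.executor.memory = 2g and the default Spark-on-YARN overhead of max(384 MB, 10% of executor memory), each executor container would request roughly 2048 MB + 384 MB = 2432 MB. The two YARN limits (which live in yarn-site.xml, not hive-site.xml) then need to be at least that large; the values below are assumptions for a single small node, not recommendations:

 <!-- Example values only; must cover executor memory + overhead (about 2432 MB here). -->
 <property>
     <name>yarn.nodemanager.resource.memory-mb</name>
     <value>4096</value>
 </property>

 <property>
     <name>yarn.scheduler.maximum-allocation-mb</name>
     <value>4096</value>
 </property>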

4) How could I fix the Failed to create spark client error? Thanks a lot!

Recommended answer

In my case, setting the spark.yarn.appMasterEnv.JAVA_HOME property was a problem.

The fix:

  <property>
    <name>spark.executorEnv.JAVA_HOME</name>
    <value>${HADOOP CLUSTER JDK PATH}</value>
    <description>Must be hadoop cluster jdk PATH.</description>
  </property>

  <property>
      <name>spark.yarn.appMasterEnv.JAVA_HOME</name>
      <value>${HADOOP CLUSTER JDK PATH}</value>
      <description>Must be hadoop cluster jdk PATH.</description>
  </property>
