How to configure Hive to use Spark?


Problem description


I have a problem using Hive on Spark. I have installed a single-node HDP 2.1 (Hadoop 2.4) via Ambari on my CentOS 6.5 machine. I'm trying to run Hive on Spark, so I followed these instructions:

https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

I already downloaded the "Prebuilt for Hadoop 2.4" version of Spark, which I found on the official Apache Spark website. So I started the master with:

./spark-class org.apache.spark.deploy.master.Master

Then the worker with:

./spark-class org.apache.spark.deploy.worker.Worker spark://hadoop.hortonworks:7077

And then I started Hive with this command:

hive --auxpath /SharedFiles/spark-1.0.1-bin-hadoop2.4/lib/spark-assembly-1.1.0-hadoop2.4.0.jar

Then, according to the instructions, I had to change Hive's execution engine to Spark with this command:

set hive.execution.engine=spark;

And the result is:

Query returned non-zero code: 1, cause: 'SET hive.execution.engine=spark' FAILED in validation : Invalid value.. expects one of [mr, tez].

So if I try to launch a simple Hive query, I can see on my hadoop.hortonwork:8088 that the launched job is a MapReduce job.

Now to my question: how can I change Hive's execution engine so that Hive uses Spark instead of MapReduce? Is there any other way to change it? (I already tried changing it via Ambari and in hive-site.xml.)

Solution

set hive.execution.engine=spark;

Try this command; it should run fine.
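A note on why this failed for the asker: the validation message listing only [mr, tez] indicates the installed Hive build does not recognize the spark engine at all. HDP 2.1 ships Hive 0.13, while the Spark execution engine was only added to Hive in release 1.1.0 (HIVE-7292), so the `set` command above is accepted only on a Hive build that includes it. On such a build, the engine can also be set permanently in hive-site.xml rather than per session; a minimal sketch (the `spark.master` value assumes the standalone master started in the question):

```xml
<!-- hive-site.xml: run Hive queries on Spark by default -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<!-- point Hive at the standalone Spark master;
     host and port taken from the question's setup -->
<property>
  <name>spark.master</name>
  <value>spark://hadoop.hortonworks:7077</value>
</property>
```

With these properties in place, subsequent Hive sessions submit queries as Spark jobs without needing the per-session `set` command.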
