Hive Merge command is not working in Spark HiveContext


Question


I am running a Hive MERGE command through Spark HiveContext on Spark 1.6.3, but it fails with the error below.

2017-09-11 18:30:33 Driver [INFO ] ParseDriver - Parse Completed
2017-09-11 18:30:34 Driver [INFO ] ParseDriver - Parsing command: MERGE INTO emp_with_orc AS T USING SOURCE_TABLE AS S 
ON T.id = S.id 
WHEN MATCHED AND (S.operation = 1) THEN UPDATE SET a = S.a,b = S.b 
WHEN MATCHED AND (S.operation = 2) THEN DELETE 
WHEN NOT MATCHED THEN INSERT VALUES (S.id, S.a, S.b)
2017-09-11 18:30:34 Driver [ERROR] HiveWriter - Error while executing the merge query.
org.apache.spark.sql.AnalysisException: cannot recognize input near 'MERGE' 'INTO' 'emp_with_orc'; line 1 pos 0
    at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:318)
    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)

I am not sure whether the ACID transaction MERGE command is supported in Spark's HiveContext.

Any help on this will be appreciated.

Solution

To use the MERGE operation you will need to execute it through the Hive JDBC interface, since Spark SQL does not support MERGE as of this writing.
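A minimal sketch of that workaround, assuming a HiveServer2 instance reachable at a `jdbc:hive2://...` URL (the URL, credentials, and object name `HiveMergeViaJdbc` are placeholders, not from the question):

```scala
import java.sql.{Connection, DriverManager}

object HiveMergeViaJdbc {
  // The MERGE statement from the question, sent to HiveServer2 directly
  // instead of through Spark's HiveContext (whose 1.6.x parser rejects MERGE).
  val mergeSql: String =
    """MERGE INTO emp_with_orc AS T USING SOURCE_TABLE AS S
      |ON T.id = S.id
      |WHEN MATCHED AND (S.operation = 1) THEN UPDATE SET a = S.a, b = S.b
      |WHEN MATCHED AND (S.operation = 2) THEN DELETE
      |WHEN NOT MATCHED THEN INSERT VALUES (S.id, S.a, S.b)""".stripMargin

  def runMerge(jdbcUrl: String, user: String, password: String): Unit = {
    // Requires the hive-jdbc driver on the classpath.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn: Connection = DriverManager.getConnection(jdbcUrl, user, password)
    try {
      val stmt = conn.createStatement()
      try stmt.execute(mergeSql)
      finally stmt.close()
    } finally {
      conn.close()
    }
  }
}
```

Note that Hive-side MERGE also requires the target table to be transactional (ORC, bucketed, `transactional=true`) and ACID enabled on the cluster, which `emp_with_orc` appears to be given the table name.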
