Unable to load data into an HBase table from Hive


Problem Description


I am using Hadoop 2.7.0, Hive 1.1.0, and HBase 0.98.14-hadoop2.

I have successfully created an HBase table from Hive.

hive (Koushik)> CREATE TABLE hive_hbase_emp_test(eid int, ename string, esal double) 
              > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
              > WITH SERDEPROPERTIES 
              > ("hbase.columns.mapping" = ":key,cfstr:enm,cfsal:esl")
              > TBLPROPERTIES ("hbase.table.name" = "hive_hbase_emp_test");
OK
Time taken: 0.874 seconds

hbase(main):004:0> describe 'hive_hbase_emp_test'
Table hive_hbase_emp_test is ENABLED                                                                                                            
hive_hbase_emp_test                                                                                                                             
COLUMN FAMILIES DESCRIPTION                                                                                                                     
{NAME => 'cfsal', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VER
SIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}                
{NAME => 'cfstr', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VER
SIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}                
2 row(s) in 3.0650 seconds

But when I try to load the table from Hive, it fails.

hive (Koushik)> INSERT OVERWRITE TABLE hive_hbase_emp_test SELECT empid,empname,empsal FROM hive_employee;
Query ID = hduser_20150921110000_249675d5-9da7-49fe-b03e-3a2d813ac898
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1442836788507_0011, Tracking URL = http://localhost:8088/proxy/application_1442836788507_0011/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1442836788507_0011
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2015-09-21 11:01:39,041 Stage-0 map = 0%,  reduce = 0%
2015-09-21 11:02:39,429 Stage-0 map = 0%,  reduce = 0%
2015-09-21 11:02:45,814 Stage-0 map = 100%,  reduce = 0%
Ended Job = job_1442836788507_0011 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1442836788507_0011_m_000000 (and more) from job job_1442836788507_0011

Task with the most failures(4): 
-----
Task ID:
  task_1442836788507_0011_m_000000

URL:
  http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1442836788507_0011&tipid=task_1442836788507_0011_m_000000
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: Error in configuring object
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
	... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
	at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
	... 14 more
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
	... 17 more
Caused by: java.lang.RuntimeException: Map operator initialization failed
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:147)
	... 22 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hive.serde2.lazy.LazyUtils.getByte(Ljava/lang/String;B)B
	at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.collectSeparators(LazySerDeParameters.java:223)
	at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.<init>(LazySerDeParameters.java:90)
	at org.apache.hadoop.hive.hbase.HBaseSerDeParameters.<init>(HBaseSerDeParameters.java:95)
	at org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:117)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(FileSinkOperator.java:344)
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
	at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
	at org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:65)
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
	at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
	at org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:193)
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
	at org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:427)
	at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:126)
	... 22 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
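The root cause in the trace above is the `NoSuchMethodError` on `LazyUtils.getByte`, which typically means two incompatible versions of the same Hive library are on the classpath. A minimal sketch of auditing the versions embedded in the jar filenames on the aux path (a temporary directory with dummy jar names stands in for `/usr/lib/hive/auxlib`, so the sketch is safe to run anywhere):

```shell
# Stand-in for /usr/lib/hive/auxlib; on a real system point AUXLIB there instead.
AUXLIB=$(mktemp -d)
touch "$AUXLIB/hive-hbase-handler-1.2.1.jar" "$AUXLIB/hive-serde-1.2.1.jar"

# Print the version embedded in each Hive jar's filename so mismatches
# against the installed Hive release stand out.
for jar in "$AUXLIB"/hive-*.jar; do
  base=$(basename "$jar" .jar)
  echo "$base -> ${base##*-}"   # strip everything up to the last hyphen
done
```

Any version printed here that differs from the installed Hive release is a candidate for the classpath conflict.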

The contents of the Hive auxlib folder are as follows:

hduser@ubuntu:/usr/lib/hive/auxlib$ ls
activation-1.1.jar
aopalliance-1.0.jar
apacheds-i18n-2.0.0-M15.jar
apacheds-kerberos-codec-2.0.0-M15.jar
api-asn1-api-1.0.0-M20.jar
api-util-1.0.0-M20.jar
asm-3.1.jar
avro-1.7.4.jar
aws-java-sdk-1.7.4.jar
azure-storage-2.0.0.jar
commons-beanutils-1.7.0.jar
commons-beanutils-core-1.8.0.jar
commons-cli-1.2.jar
commons-codec-1.7.jar
commons-collections-3.2.1.jar
commons-compress-1.4.1.jar
commons-configuration-1.6.jar
commons-daemon-1.0.13.jar
commons-digester-1.8.jar
commons-el-1.0.jar
commons-httpclient-3.1.jar
commons-io-2.4.jar
commons-lang-2.6.jar
commons-lang3-3.3.2.jar
commons-logging-1.1.1.jar
commons-math-2.1.jar
commons-math3-3.1.1.jar
commons-net-3.1.jar
curator-client-2.7.1.jar
curator-framework-2.7.1.jar
curator-recipes-2.7.1.jar
findbugs-annotations-1.3.9-1.jar
gmbal-api-only-3.0.0-b023.jar
grizzly-framework-2.1.2.jar
grizzly-http-2.1.2.jar
grizzly-http-server-2.1.2.jar
grizzly-http-servlet-2.1.2.jar
grizzly-rcm-2.1.2.jar
gson-2.2.4.jar
guava-12.0.1.jar
guice-3.0.jar
guice-servlet-3.0.jar
hadoop-annotations-2.7.0.jar
hadoop-ant-2.7.0.jar
hadoop-archives-2.7.0.jar
hadoop-auth-2.7.0.jar
hadoop-aws-2.7.0.jar
hadoop-azure-2.7.0.jar
hadoop-client-2.2.0.jar
hadoop-common-2.2.0.jar
hadoop-datajoin-2.7.0.jar
hadoop-distcp-2.7.0.jar
hadoop-extras-2.7.0.jar
hadoop-gridmix-2.7.0.jar
hadoop-hdfs-2.7.0.jar
hadoop-hdfs-2.7.0-tests.jar
hadoop-hdfs-nfs-2.7.0.jar
hadoop-mapreduce-client-app-2.7.0.jar
hadoop-mapreduce-client-common-2.7.0.jar
hadoop-mapreduce-client-core-2.7.0.jar
hadoop-mapreduce-client-hs-2.7.0.jar
hadoop-mapreduce-client-hs-plugins-2.7.0.jar
hadoop-mapreduce-client-jobclient-2.7.0.jar
hadoop-mapreduce-client-jobclient-2.7.0-tests.jar
hadoop-mapreduce-client-shuffle-2.7.0.jar
hadoop-mapreduce-examples-2.7.0.jar
hadoop-openstack-2.7.0.jar
hadoop-rumen-2.7.0.jar
hadoop-sls-2.7.0.jar
hadoop-streaming-2.7.0.jar
hadoop-yarn-api-2.7.0.jar
hadoop-yarn-applications-distributedshell-2.7.0.jar
hadoop-yarn-applications-unmanaged-am-launcher-2.7.0.jar
hadoop-yarn-client-2.7.0.jar
hadoop-yarn-common-2.7.0.jar
hadoop-yarn-registry-2.7.0.jar
hadoop-yarn-server-applicationhistoryservice-2.7.0.jar
hadoop-yarn-server-common-2.7.0.jar
hadoop-yarn-server-nodemanager-2.7.0.jar
hadoop-yarn-server-resourcemanager-2.7.0.jar
hadoop-yarn-server-sharedcachemanager-2.7.0.jar
hadoop-yarn-server-tests-2.7.0.jar
hadoop-yarn-server-web-proxy-2.7.0.jar
hamcrest-core-1.3.jar
hbase-annotations-0.98.14-hadoop2.jar
hbase-checkstyle-0.98.14-hadoop2.jar
hbase-client-0.98.14-hadoop2.jar
hbase-common-0.98.14-hadoop2.jar
hbase-common-0.98.14-hadoop2-tests.jar
hbase-examples-0.98.14-hadoop2.jar
hbase-hadoop2-compat-0.98.14-hadoop2.jar
hbase-hadoop-compat-0.98.14-hadoop2.jar
hbase-it-0.98.14-hadoop2.jar
hbase-it-0.98.14-hadoop2-tests.jar
hbase-prefix-tree-0.98.14-hadoop2.jar
hbase-protocol-0.98.14-hadoop2.jar
hbase-resource-bundle-0.98.14-hadoop2.jar
hbase-rest-0.98.14-hadoop2.jar
hbase-server-0.98.14-hadoop2.jar
hbase-server-0.98.14-hadoop2-tests.jar
hbase-shell-0.98.14-hadoop2.jar
hbase-testing-util-0.98.14-hadoop2.jar
hbase-thrift-0.98.14-hadoop2.jar
high-scale-lib-1.1.1.jar
hive-hbase-handler-1.2.1.jar
hive-serde-1.2.1.jar
htrace-core-2.04.jar
htrace-core-3.1.0-incubating.jar
httpclient-4.1.3.jar
httpclient-4.2.5.jar
httpcore-4.1.3.jar
httpcore-4.2.5.jar
jackson-annotations-2.2.3.jar
jackson-core-2.2.3.jar
jackson-core-asl-1.8.8.jar
jackson-core-asl-1.9.13.jar
jackson-databind-2.2.3.jar
jackson-jaxrs-1.8.8.jar
jackson-jaxrs-1.9.13.jar
jackson-mapper-asl-1.8.8.jar
jackson-mapper-asl-1.9.13.jar
jackson-xc-1.9.13.jar
jamon-runtime-2.3.1.jar
jasper-compiler-5.5.23.jar
jasper-runtime-5.5.23.jar
javax.inject-1.jar
java-xmlbuilder-0.4.jar
javax.servlet-3.1.jar
javax.servlet-api-3.0.1.jar
jaxb-api-2.2.2.jar
jaxb-impl-2.2.3-1.jar
jcodings-1.0.8.jar
jersey-client-1.8.jar
jersey-core-1.8.jar
jersey-core-1.9.jar
jersey-grizzly2-1.9.jar
jersey-guice-1.9.jar
jersey-json-1.9.jar
jersey-server-1.9.jar
jersey-test-framework-core-1.9.jar
jersey-test-framework-grizzly2-1.9.jar
jets3t-0.9.0.jar
jettison-1.1.jar
jettison-1.3.1.jar
jetty-6.1.26.jar
jetty-sslengine-6.1.26.jar
jetty-util-6.1.26.jar
joda-time-2.7.jar
joni-2.1.2.jar
jruby-complete-1.6.8.jar
jsch-0.1.42.jar
jsp-2.1-6.1.14.jar
jsp-api-2.1-6.1.14.jar
jsp-api-2.1.jar
jsr305-3.0.0.jar
junit-4.11.jar
leveldbjni-all-1.8.jar
libthrift-0.9.0.jar
log4j-1.2.17.jar
management-api-3.0.0-b012.jar
metrics-core-3.0.1.jar
mockito-all-1.8.5.jar
netty-3.6.6.Final.jar
paranamer-2.3.jar
protobuf-java-2.5.0.jar
servlet-api-2.5-6.1.14.jar
servlet-api-2.5.jar
slf4j-api-1.6.4.jar
slf4j-log4j12-1.6.4.jar
snappy-java-1.0.4.1.jar
stax-api-1.0-2.jar
xmlenc-0.52.jar
xz-1.0.jar
zookeeper-3.4.6.jar

What am I missing here?

Solution

Actually, I had made a mistake. I had kept hive-hbase-handler-1.2.1.jar and hive-serde-1.2.1.jar in the auxlib path, and those were causing the problem. Once I removed the 1.2.1 jars, everything worked fine with hive-hbase-handler-1.1.0.jar and hive-serde-1.1.0.jar. So the problem is resolved only when the jars match Hive 1.1.0 (with HBase 0.98.14 and Hadoop 2.7.0).
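The fix can be sketched as a couple of shell steps. The paths and version numbers below follow the setup described above, and a temporary directory stands in for the real auxlib so the sketch is safe to run as-is:

```shell
AUXLIB=$(mktemp -d)   # stand-in for /usr/lib/hive/auxlib on the real machine
touch "$AUXLIB/hive-hbase-handler-1.2.1.jar" "$AUXLIB/hive-serde-1.2.1.jar"

# 1. Remove the jars whose version does not match the installed Hive (1.1.0 here).
rm -f "$AUXLIB"/hive-hbase-handler-1.2.1.jar "$AUXLIB"/hive-serde-1.2.1.jar

# 2. Put the jars that ship with the installed Hive release in their place
#    (on a real system: cp "$HIVE_HOME"/lib/hive-serde-1.1.0.jar "$AUXLIB"/).
touch "$AUXLIB/hive-hbase-handler-1.1.0.jar" "$AUXLIB/hive-serde-1.1.0.jar"

ls "$AUXLIB"
```

After swapping the jars, restart the Hive session so the corrected classpath takes effect before rerunning the INSERT.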
