Using Phoenix with Cloudera HBase (installed from repo)


Problem description


I can get Phoenix working on a standalone Apache HBase.

(Note: all of this is for HBase 1.0.0 on RHEL 6.5.)

With the Cloudera flavour of HBase, however, I can never get it working without it throwing exceptions. (I even tried RHEL 7 minimal as the OS.)

The same thing happens with Phoenix 4.4 for HBase 1.0.

hbase(main):001:0> version
1.0.0-cdh5.4.4, rUnknown, Mon Jul  6 16:59:55 PDT 2015

Stack trace:

    [ec2-user@ip-172-31-60-109 phoenix-4.5.0-HBase-1.0-bin]$ bin/sqlline.py localhost:2181:/hbase
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost:2181:/hbase none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost:2181:/hbase
15/08/06 03:10:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/06 03:10:26 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
15/08/06 03:10:27 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1269)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11619)
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7054)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1746)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1728)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31447)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:966)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1042)
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1216)
    ... 10 more

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:313)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1609)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
    at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.createTable(MetaDataProtos.java:11799)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$6.call(ConnectionQueryServicesImpl.java:1273)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$6.call(ConnectionQueryServicesImpl.java:1261)
    at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1737)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
    at ... 

Solution

Cloudera doesn't officially support Apache Phoenix; it is still in Cloudera Labs, so you will not find any Cloudera Phoenix tar.gz files in the Cloudera repository. The only place Phoenix appears in the Cloudera repository is the parcel repository, and parcels can be used only if you install through Cloudera Manager. The latest available version of Cloudera Phoenix is 4.3.0.

If you want to run Phoenix 4.4 or 4.5 on the Cloudera Hadoop distribution, you need to rebuild the Phoenix libraries using the CDH dependency jars. You cannot simply use the Apache Phoenix tar.gz.
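To see the binary incompatibility for yourself, you can inspect the Scan class shipped with CDH. This is just a quick diagnostic sketch; the jar path below is an assumption, so adjust it to wherever your CDH HBase client jar actually lives:

[ec2-user@ip-172-31-60-109 ~]$ # jar path is an example; locate yours with: find / -name 'hbase-client-*.jar' 2>/dev/null
[ec2-user@ip-172-31-60-109 ~]$ javap -classpath /usr/lib/hbase/lib/hbase-client-1.0.0-cdh5.4.4.jar \
    org.apache.hadoop.hbase.client.Scan | grep setRaw

If the signature printed here does not match the (Z)Lorg/apache/hadoop/hbase/client/Scan; descriptor from the NoSuchMethodError above, the coprocessors in the stock Apache build cannot link against the CDH jars.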

Here are the steps.

Recently I found that Andrew Purtell has done tremendous work to make Phoenix compatible with CDH. His work is available on the GitHub page linked below; download the appropriate branch from it. This saves you time.

https://github.com/chiastic-security/phoenix-for-cloudera/branches
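For example, cloning the repo and switching to a matching branch might look like this (the branch name below is hypothetical; list the remote branches and pick the one matching your Phoenix and CDH versions):

[h4ck3r@host1 ~]$ git clone https://github.com/chiastic-security/phoenix-for-cloudera.git
[h4ck3r@host1 ~]$ cd phoenix-for-cloudera
[h4ck3r@host1 phoenix-for-cloudera]$ git branch -r                       # see which branches exist
[h4ck3r@host1 phoenix-for-cloudera]$ git checkout 4.5-HBase-1.0-cdh5     # hypothetical branch name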

  • Download the Apache Phoenix 4.5 source from the Apache repository. (Skip this step if you downloaded from the GitHub page above.)

Rebuild the source code using the CDH dependency jars. Update pom.xml and two source files as follows (my CDH version is 5.4.2):

[h4ck3r@host1 phoenix]$ diff phoenix-4.5_Updated/phoenix-4.5.0-HBase-1.0-src/pom.xml  phoenix-4.5_Orig/phoenix-4.5.0-HBase-1.0-src/pom.xml
28c28
< <!--    <module>phoenix-pig</module> -->
---
>     <module>phoenix-pig</module>
37a38,41
>       <id>apache release</id>
>       <url>https://repository.apache.org/content/repositories/releases/</url>
>     </repository>
>     <repository>
42,43c46,50
<       <id>cloudera</id>
<       <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
---
>       <id>apache snapshot</id>
>       <url>https://repository.apache.org/content/repositories/snapshots/</url>
>       <snapshots>
>         <enabled>true</enabled>
>       </snapshots>
45d51
<
54d59
<
77,81c82,83
<     <hbase.version>1.0.0-cdh5.4.2</hbase.version>
<     <hadoop-two.version>2.6.0-cdh5.4.2</hadoop-two.version>
<     <hadoop.version>2.6.0-cdh5.4.2</hadoop.version>
<     <pig.version>0.12.0</pig.version>
<     <flume.version>1.5.0-cdh5.4.2</flume.version>
---
>     <hbase.version>1.0.1</hbase.version>
>     <hadoop-two.version>2.5.1</hadoop-two.version>
84a87,88
>     <hadoop.version>2.5.1</hadoop.version>
>     <pig.version>0.13.0</pig.version>
97a102
>     <flume.version>1.4.0</flume.version>
449,450c454
<
<   <dependency>
---
>       <dependency>
454c458
<       </dependency>
---
>       </dependency>

[h4ck3r@host1 phoenix]$ diff phoenix-4.5_Updated/phoenix-4.5.0-HBase-1.0-src/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java  phoenix-4.5_Orig/phoenix-4.5.0-HBase-1.0-src/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java
84c84
<                     rss.getServerName(), metaEntries,1);
---
>                     rss.getServerName(), metaEntries);

[h4ck3r@host1 phoenix]$ diff phoenix-4.5_Updated/phoenix-4.5.0-HBase-1.0-src/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexSplitTransaction.java phoenix-4.5_Orig/phoenix-4.5.0-HBase-1.0-src/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexSplitTransaction.java
291c291
<                 daughterRegions.getSecond().getRegionInfo(), server.getServerName(),1);
---
>                 daughterRegions.getSecond().getRegionInfo(), server.getServerName());
978c978
< }
---
> }
\ No newline at end of file
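
With those changes in place, rebuild from the top of the modified source tree. A minimal build sketch (the -DskipTests flag is a common time-saver; adjust the invocation to your environment):

[h4ck3r@host1 phoenix]$ cd phoenix-4.5_Updated/phoenix-4.5.0-HBase-1.0-src
[h4ck3r@host1 phoenix-4.5.0-HBase-1.0-src]$ mvn clean package -DskipTests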

  • The above build will create new jars under the target directory of each sub-component.
  • Download the Apache Phoenix 4.5 binary from the Apache repository.
  • Extract phoenix-4.5.0-HBase-1.0-bin.tar.gz and replace the Phoenix jars below with the newly built ones:

    • phoenix-4.5.0-HBase-1.0-client.jar
    • phoenix-4.5.0-HBase-1.0-server-without-antlr.jar
    • phoenix-4.5.0-HBase-1.0-client-minimal.jar
    • phoenix-assembly-4.5.0-HBase-1.0-tests.jar
    • phoenix-4.5.0-HBase-1.0-client-without-hbase.jar
    • phoenix-core-4.5.0-HBase-1.0.jar
    • phoenix-4.5.0-HBase-1.0-server.jar
  • Replace phoenix-4.5.0-HBase-1.0-server.jar and phoenix-core-4.5.0-HBase-1.0.jar in the HBase lib location and restart HBase; a shell sketch of this and the next step follows after this list. (In 4.7, only phoenix-4.7.0-cdh5.X.1-server.jar needs to be copied to the HBase lib.)

  • Execute the Phoenix commands from the newly updated directory.
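
Putting those last two steps together, the jar swap and restart might look like the sketch below. The paths and service names are assumptions: /usr/lib/hbase/lib is typical for a package-based CDH install (parcel installs use /opt/cloudera/parcels/CDH/lib/hbase/lib instead), and on a Cloudera Manager cluster you would restart HBase from the CM UI rather than with service commands:

# copy the rebuilt server-side jars into the HBase lib directory on every HBase node
[h4ck3r@host1 ~]$ sudo cp phoenix-4.5.0-HBase-1.0-server.jar phoenix-core-4.5.0-HBase-1.0.jar /usr/lib/hbase/lib/

# restart HBase so the region servers pick up the new coprocessor jars
[h4ck3r@host1 ~]$ sudo service hbase-master restart
[h4ck3r@host1 ~]$ sudo service hbase-regionserver restart

# run the client from the updated binary directory
[h4ck3r@host1 phoenix-4.5.0-HBase-1.0-bin]$ bin/sqlline.py localhost:2181:/hbase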

Due to some dependency issues, phoenix-pig is not handled; this is just a workaround.
