HBase Scala connectivity issue in Cloudera QuickStart VM CDH 5.8.0


Problem Description


I am trying to connect to HBase from Scala code, but I am getting the error below.

17/03/28 11:40:53 INFO client.RpcRetryingCaller: Call exception, tries=30, retries=35, started=450502 ms ago, cancelled=false, msg=  
17/03/28 11:41:13 INFO client.RpcRetryingCaller: Call exception, tries=31, retries=35, started=470659 ms ago, cancelled=false, msg=  
17/03/28 11:41:33 INFO client.RpcRetryingCaller: Call exception, tries=32, retries=35, started=490824 ms ago, cancelled=false, msg=  
17/03/28 11:41:53 INFO client.RpcRetryingCaller: Call exception, tries=33, retries=35, started=510834 ms ago, cancelled=false, msg=  
17/03/28 11:42:13 INFO client.RpcRetryingCaller: Call exception, tries=34, retries=35, started=530956 ms ago, cancelled=false, msg=  
[error] (run-main-0) org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:  
[error] Tue Mar 28 11:33:22 PDT 2017, RpcRetryingCaller{globalStartTime=1490726002560, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: org/apache/hadoop/net/SocketInputWrapper  
[error] Tue Mar 28 11:33:23 PDT 2017, RpcRetryingCaller{globalStartTime=1490726002560, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: org/apache/hadoop/net/SocketInputWrapper  
[error] Tue Mar 28 11:33:23 PDT 2017, RpcRetryingCaller{globalStartTime=1490726002560, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: org/apache/hadoop/net/SocketInputWrapper  
[error] Tue Mar 28 11:33:24 PDT 2017, RpcRetryingCaller{globalStartTime=1490726002560, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: org/apache/hadoop/net/SocketInputWrapper  
.  
.  
.  
.  
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4117)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4110)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:427)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:411)
    at Hi$.main(hw.scala:12)
    at Hi.main(hw.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: org/apache/hadoop/net/SocketInputWrapper
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1580)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1737)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4117)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4110)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:427)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:411)
    at Hi$.main(hw.scala:12)
    at Hi.main(hw.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
Caused by: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: org/apache/hadoop/net/SocketInputWrapper
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1591)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1529)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1551)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1580)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1737)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4117)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4110)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:427)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:411)
    at Hi$.main(hw.scala:12)
    at Hi.main(hw.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/net/SocketInputWrapper
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.createConnection(RpcClientImpl.java:138)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.getConnection(RpcClientImpl.java:1316)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1224)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1591)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1529)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1551)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1580)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1737)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4117)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4110)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:427)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:411)
    at Hi$.main(hw.scala:12)
    at Hi.main(hw.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.net.SocketInputWrapper
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.createConnection(RpcClientImpl.java:138)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.getConnection(RpcClientImpl.java:1316)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1224)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58383)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1591)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1529)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1551)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1580)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1737)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4117)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4110)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:427)
    at org.apache.hadoop.hbase.client.HBaseAdmin.listTables(HBaseAdmin.java:411)
    at Hi$.main(hw.scala:12)
    at Hi.main(hw.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
17/03/28 07:56:55 ERROR zookeeper.ClientCnxn: Event thread exiting due to interruption
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
17/03/28 07:56:55 INFO zookeeper.ClientCnxn: EventThread shut down
java.lang.RuntimeException: Nonzero exit code: 1
    at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 544 s, completed Mar 28, 2017 7:56:56 AM

• Host OS is Windows 7, 64-bit, with 8 GB RAM and an Intel Core i5.
• I am using the Cloudera QuickStart VM CDH 5.8.0 on my Windows host.
• The VM is allocated 6 GB RAM, 2 processors & a 64 GB hard disk.
• Services running in Cloudera Manager:

    HBase
    HDFS  
    YARN  
    Zookeeper  
    Key-Value Indexer  

• Services stopped in Cloudera Manager:

    Hive
    Hue
    Impala
    Oozie
    Solr
    Spark
    Sqoop 1 Client
    Sqoop 2

• HBase version is 1.2.0-cdh5.8.0.
• My client code runs inside the VM only.
• I created an sbt project.
• I referred to this URL, https://hbase.apache.org/book.html#scala, for HBase connectivity with Scala.
• I set the CLASSPATH. I did not include the "/path/to/scala-library.jar" entry in the CLASSPATH that the link mentions.

$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64  
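
A quick sanity check can save time here (an editor's suggestion, not part of the original post; the path is assumed from the QuickStart VM's standard CDH package layout): list the Hadoop jars the VM actually ships, so the versions declared in build.sbt can be matched to the cluster.

$ ls /usr/lib/hadoop/hadoop-common*.jar   # the version in this filename should drive build.sbt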

• Project root directory = /home/cloudera/Desktop/play-sbt-project
• My /home/cloudera/Desktop/play-sbt-project/build.sbt looks like this. I changed the dependency versions to match my environment, and I added a few more dependencies ("hbase-client", "hbase-common" & "hbase-server") while troubleshooting the error, but still had no success.

name := "play-sbt-project"
version := "1.0"
scalaVersion := "2.10.2"
resolvers += "Apache HBase" at "https://repository.apache.org/content/repositories/releases"
resolvers += "Thrift" at "http://people.apache.org/~rawson/repo/"
libraryDependencies ++= Seq(
"org.apache.hadoop" % "hadoop-core" % "1.2.1",
"org.apache.hbase" % "hbase" % "1.2.0",
"org.apache.hbase" % "hbase-client" % "1.2.0",
"org.apache.hbase" % "hbase-common" % "1.2.0",
"org.apache.hbase" % "hbase-server" % "1.2.0"
)

• My main code for HBase connectivity, /home/cloudera/Desktop/play-sbt-project/src/main/scala/pw.scala, looks like this:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{ConnectionFactory, HBaseAdmin, HTable, Put, Get}
import org.apache.hadoop.hbase.util.Bytes

object Hi {
  def main(args: Array[String]) = {
    println("Hi!")
    val conf = new HBaseConfiguration()
    val connection = ConnectionFactory.createConnection(conf)
    val admin = connection.getAdmin()

    // list the tables
    val listtables = admin.listTables()
    listtables.foreach(println)
  }
}

• My /etc/hbase/conf/hbase-site.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>

<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://quickstart.cloudera:8020/hbase</value>
  </property>
  <property>
    <name>hbase.replication</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.client.write.buffer</name>
    <value>2097152</value>
  </property>
  <property>
    <name>hbase.client.pause</name>
    <value>100</value>
  </property>
  <property>
    <name>hbase.client.retries.number</name>
    <value>35</value>
  </property>
  <property>
    <name>hbase.client.scanner.caching</name>
    <value>100</value>
  </property>
  <property>
    <name>hbase.client.keyvalue.maxsize</name>
    <value>10485760</value>
  </property>
  <property>
    <name>hbase.ipc.client.allowsInterrupt</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.client.primaryCallTimeout.get</name>
    <value>10</value>
  </property>
  <property>
    <name>hbase.client.primaryCallTimeout.multiget</name>
    <value>10</value>
  </property>
  <property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
  </property>
  <property>
    <name>hbase.regionserver.thrift.http</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.thrift.support.proxyuser</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.snapshot.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.snapshot.master.timeoutMillis</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.snapshot.region.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.snapshot.master.timeout.millis</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.security.authentication</name>
    <value>simple</value>
  </property>
  <property>
    <name>hbase.rpc.protection</name>
    <value>authentication</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
  </property>
  <property>
    <name>zookeeper.znode.rootserver</name>
    <value>root-region-server</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <!-- <value>quickstart.cloudera</value> -->
    <value>127.0.0.1</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.rest.ssl.enabled</name>
    <value>false</value>
  </property>
</configuration>
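
Because the client locates the HBase master through ZooKeeper, it is worth confirming that ZooKeeper answers on the quorum address configured above before digging further (a suggested diagnostic, not from the original post; it uses ZooKeeper's standard "ruok" four-letter command and assumes nc is available in the VM):

$ echo ruok | nc 127.0.0.1 2181
imok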

I googled a lot to solve this issue but did not succeed. While troubleshooting, I made the following changes:
• Changed the dependency versions in the build.sbt file to match my environment.
• Added a few more dependencies: "hbase-client", "hbase-common" & "hbase-server".
• Changed the "hbase.zookeeper.quorum" value from "quickstart.cloudera" to "127.0.0.1" in the "hbase-site.xml" file.

Please help me solve this issue. Thank you.

Solution

Resolved the issue. The following changes need to be made:

  1. Change "hadoop-core" to "hadoop-common" inside build.sbt file. Since in latest CDH versions 'hadoop-core' is only supported by code running for MapReduce 1.
  2. Change all the dependency versions in build.sbt to match Cloudera 5.8.0. The updated build.sbt looks like this:

    name := "play-sbt-project"  
    version := "1.0"  
    scalaVersion := "2.10.2"  
    resolvers += "Thrift" at "http://people.apache.org/~rawson/repo/"  
    resolvers += "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"  
    
    libraryDependencies ++= Seq(  
     "org.apache.hadoop" % "hadoop-common" % "2.6.0-cdh5.8.0",  
     "org.apache.hbase" % "hbase" % "1.2.0-cdh5.8.0",  
     "org.apache.hbase" % "hbase-client" % "1.2.0-cdh5.8.0",  
     "org.apache.hbase" % "hbase-common" % "1.2.0-cdh5.8.0",  
     "org.apache.hbase" % "hbase-server" % "1.2.0-cdh5.8.0"  
    )  
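
    After updating build.sbt, a clean rebuild makes sure no stale hadoop-core classes linger on the classpath (a suggested step, not stated in the original answer):

    $ sbt clean run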
    

  3. The HBaseConfiguration() constructor is deprecated; use the create() factory method instead. I also changed some logic in the main code. Earlier I was listing the tables present in HBase (this was giving some trouble, so I dropped it for now and will try it again next time); since my goal is to establish Scala-to-HBase connectivity, I now insert a new row into an already existing HBase table. The new code looks like this:

    package main.scala

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.{ConnectionFactory, HTable, Put}
    import org.apache.hadoop.hbase.util.Bytes

    object Hi {

      def main(args: Array[String]) = {
        println("Hi!")
        // Build the configuration via the create() factory method; the
        // HBaseConfiguration() constructor is deprecated
        val conf: Configuration = HBaseConfiguration.create()
        // Open a handle to the existing "emp1" table
        val table: HTable = new HTable(conf, "emp1")
        // Insert one cell: row "row1", family "personal_data",
        // qualifier "qual1", value "val1"
        val put1: Put = new Put(Bytes.toBytes("row1"))
        put1.add(Bytes.toBytes("personal_data"), Bytes.toBytes("qual1"), Bytes.toBytes("val1"))
        table.put(put1)
        println("Success")
      }
    }
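
    For reference, HTable and Put.add are themselves deprecated in the HBase 1.x client API. Below is a minimal sketch of the same insert using the non-deprecated Connection/Table API (an editor's addition for comparison, assuming the same existing "emp1" table and "personal_data" column family as above):

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
    import org.apache.hadoop.hbase.util.Bytes

    object HiModern {
      def main(args: Array[String]): Unit = {
        val conf = HBaseConfiguration.create()
        // Connections are heavyweight and thread-safe; create one and share it
        val connection = ConnectionFactory.createConnection(conf)
        try {
          // Table handles are lightweight; obtain per use and close afterwards
          val table = connection.getTable(TableName.valueOf("emp1"))
          val put = new Put(Bytes.toBytes("row1"))
          // addColumn is the non-deprecated replacement for Put.add
          put.addColumn(Bytes.toBytes("personal_data"), Bytes.toBytes("qual1"), Bytes.toBytes("val1"))
          table.put(put)
          table.close()
          println("Success")
        } finally {
          connection.close()
        }
      }
    }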
    
