Hbase connection about zookeeper error


Problem description


Environment: Ubuntu 14.04, hadoop-2.2.0, hbase-0.98.7

When I start hadoop and hbase (single-node mode), both start successfully (I also checked the web UIs: port 8088 for hadoop, 60010 for hbase).

jps
4507 SecondaryNameNode
5350 HRegionServer
4197 NameNode
4795 NodeManager
3948 QuorumPeerMain
5209 HMaster
4678 ResourceManager
5831 Jps
4310 DataNode

But when I check hbase-hadoop-master-localhost.log, I find the following messages:

    2014-10-23 14:16:11,392 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-10-23 14:16:11,426 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
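Note that both lines are only INFO-level, and the second one shows the socket connection succeeding, so the "unknown error" text by itself may be harmless. One way to probe the embedded ZooKeeper directly is its built-in four-letter commands; the helper below is my own sketch (not from the original post), assuming ZooKeeper listens on localhost:2181 and bash is the shell:

```shell
# Hypothetical helper: send ZooKeeper's built-in "ruok" four-letter command
# over bash's /dev/tcp pseudo-device, so no extra tools are needed.
# A healthy server replies "imok".
zk_ruok() {
  local host=$1 port=$2
  exec 3<>"/dev/tcp/$host/$port" || return 1   # open a TCP connection
  printf 'ruok' >&3                            # send the four-letter command
  cat <&3                                      # print the reply
  exec 3<&- 3>&-                               # close the descriptor
}
```

Running `zk_ruok localhost 2181` against a healthy ZooKeeper should print `imok`; if it does, the SASL INFO line is unlikely to be the real problem.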

I have googled a lot about that "unknown error" message, but I can't solve this problem... Below is my hadoop and hbase configuration.

Hadoop:

slaves content: localhost

core-site.xml

<configuration>
    <property>
         <name>fs.defaultFS</name>
         <value>hdfs://localhost:8020</value>
     </property>
</configuration>

yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:9001</value>
    <description>host is the hostname of the resource manager and 
    port is the port on which the NodeManagers contact the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:9002</value>
    <description>host is the hostname of the resourcemanager and port is the port
    on which the Applications in the cluster talk to the Resource Manager.
    </description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    <description>In case you do not want to use the default scheduler</description>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:9003</value>
    <description>the host is the hostname of the ResourceManager and the port is the port on
    which the clients can talk to the Resource Manager. </description>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value></value>
    <description>the local directories used by the nodemanager</description>
  </property>

  <property>
    <name>yarn.nodemanager.address</name>
    <value>localhost:9004</value>
    <description>the nodemanagers bind to this port</description>
  </property>  

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>10240</value>
    <description>the amount of memory on the NodeManager in GB</description>
  </property>

  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/app-logs</value>
    <description>directory on hdfs where the application logs are moved to </description>
  </property>

   <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value></value>
    <description>the directories used by Nodemanagers as log directories</description>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run </description>
  </property>
</configuration>

Hbase:

hbase-env.sh:

..
export JAVA_HOME="/usr/lib/jvm/java-7-oracle"
..
export HBASE_MANAGES_ZK=true
..

hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:8020/hbase</value>
    </property>
    <property> 
        <name>hbase.cluster.distributed</name> 
        <value>true</value> 
    </property> 
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value> 
    </property>
</configuration>  

regionservers content: localhost

my /etc/hosts content:

127.0.0.1       localhost
#127.0.1.1      localhost

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
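Given that hosts file, it is worth confirming that `localhost` really resolves to 127.0.0.1, since both HBase and ZooKeeper are addressed by that name here. A quick check (my suggestion, assuming a glibc-based system where `getent` is available):

```shell
# List every address "localhost" resolves to; with the hosts file above this
# should include 127.0.0.1 and must NOT include the commented-out 127.0.1.1.
getent ahosts localhost
```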

I have tried many ways to solve it, but all have failed. Please help me fix this; I really need to know how.

Originally, I ran a mapreduce job, and at map 67% reduce 0% it printed some INFO messages, including the following:

14/10/23 15:50:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.hadoop.hbase.client.HConnectionManager$ClientZKWatcher@ce1472
14/10/23 15:50:41 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
14/10/23 15:50:41 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
14/10/23 15:50:41 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1493be510380007, negotiated timeout = 40000
14/10/23 15:50:43 INFO mapred.LocalJobRunner: map > sort
14/10/23 15:50:46 INFO mapred.LocalJobRunner: map > sort

Then it crashes. I think the program may be deadlocked, and that is why I want to solve the zookeeper problem above.

If you need any other configuration file I set in hadoop, hbase, or elsewhere, just tell me and I'll post it. Thanks!

Solution

Add the following properties to the hbase-site.xml file:

<property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.56.101</value> <!-- this is my server IP -->
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property>

Then restart HBase with ./start-hbase.sh.
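The restart step can be sketched as below. `HBASE_HOME` is my assumption for wherever hbase-0.98.7 is installed; adjust it to your layout:

```shell
# Hypothetical restart helper: stop and start HBase so the new
# hbase.zookeeper.quorum setting is picked up, then confirm the daemons.
restart_hbase() {
  "$HBASE_HOME/bin/stop-hbase.sh"
  "$HBASE_HOME/bin/start-hbase.sh"
  # HMaster and HRegionServer should reappear in the jps listing
  jps | grep -E 'HMaster|HRegionServer'
}
```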
