Hbase connection about zookeeper error
Environment : Ubuntu 14.04 , hadoop-2.2.0 , hbase-0.98.7
When I start Hadoop and HBase (single-node mode), both start successfully (I also checked the web UIs: port 8088 for Hadoop, 60010 for HBase).
jps
4507 SecondaryNameNode
5350 HRegionServer
4197 NameNode
4795 NodeManager
3948 QuorumPeerMain
5209 HMaster
4678 ResourceManager
5831 Jps
4310 DataNode
But when I check hbase-hadoop-master-localhost.log, I find the following information:
2014-10-23 14:16:11,392 INFO [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-10-23 14:16:11,426 INFO [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
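These two lines are INFO-level, and the "(unknown error)" in the SASL message is normally harmless: the very next line shows the socket connection being established. What matters is whether the session actually completes. As a minimal sketch for checking that the client port from the log is reachable at all (the function name and defaults are my own illustration, not part of any HBase API):

```python
import socket

def zk_port_open(host="localhost", port=2181, timeout=2.0):
    """Return True if a plain TCP connection to ZooKeeper's client port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the host/port that appear in the log lines above.
print(zk_port_open("localhost", 2181))
```

If this prints False while QuorumPeerMain is running, the problem is networking or hostname resolution rather than SASL.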
I have googled a lot of sites about that "unknown error" message, but I can't solve this problem... Below are my Hadoop and HBase configurations.
Hadoop :
slaves content : localhost
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8020</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>localhost:9001</value>
<description>host is the hostname of the resource manager and
port is the port on which the NodeManagers contact the Resource Manager.
</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>localhost:9002</value>
<description>host is the hostname of the resourcemanager and port is the port
on which the Applications in the cluster talk to the Resource Manager.
</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
<description>In case you do not want to use the default scheduler</description>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>localhost:9003</value>
<description>the host is the hostname of the ResourceManager and the port is the port on
which the clients can talk to the Resource Manager. </description>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value></value>
<description>the local directories used by the nodemanager</description>
</property>
<property>
<name>yarn.nodemanager.address</name>
<value>localhost:9004</value>
<description>the nodemanagers bind to this port</description>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>10240</value>
<description>the amount of memory on the NodeManager, in MB</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/app-logs</value>
<description>directory on hdfs where the application logs are moved to </description>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value></value>
<description>the directories used by Nodemanagers as log directories</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run </description>
</property>
</configuration>
Hbase:
hbase-env.sh :
..
export JAVA_HOME="/usr/lib/jvm/java-7-oracle"
..
export HBASE_MANAGES_ZK=true
..
hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
regionservers content : localhost
my /etc/hosts content:
127.0.0.1 localhost
#127.0.1.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
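HBase and ZooKeeper are sensitive to hostname resolution, so commenting out the 127.0.1.1 line is the right instinct. A quick sketch to confirm that "localhost" now resolves only to the loopback address the server binds to (just a resolution check, nothing HBase-specific):

```python
import socket

# With the 127.0.1.1 line commented out, "localhost" should resolve
# to the loopback address 127.0.0.1 that ZooKeeper binds to.
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", 2181, socket.AF_INET)}
print(addrs)
```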
I have tried lots of methods to solve it, but all failed. Please help me solve it; I really need to know how.
Originally, I ran a MapReduce program, and at map 67% reduce 0% it printed out some INFO messages, some of which follow:
14/10/23 15:50:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.hadoop.hbase.client.HConnectionManager$ClientZKWatcher@ce1472
14/10/23 15:50:41 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
14/10/23 15:50:41 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
14/10/23 15:50:41 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1493be510380007, negotiated timeout = 40000
14/10/23 15:50:43 INFO mapred.LocalJobRunner: map > sort
14/10/23 15:50:46 INFO mapred.LocalJobRunner: map > sort
Then it crashes... I think the program may be deadlocked, and that is why I want to solve the ZooKeeper problem above.
If you need any other configuration file I set in Hadoop, HBase, or elsewhere, just tell me and I'll post it. Thanks!
Add the following properties in the hbase-site.xml file:
<property>
<name>hbase.zookeeper.quorum</name>
<value>192.168.56.101</value> <!-- this is my server IP -->
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
Then restart HBase with ./start-hbase.sh.
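To sanity-check the edited file before restarting, you can parse the two properties back out. A minimal sketch (the inline XML string mirrors the snippet above; in practice you would read your own conf/hbase-site.xml instead):

```python
import xml.etree.ElementTree as ET

# Inline copy of the two properties added above; in practice you would
# parse your own conf/hbase-site.xml instead of this string.
HBASE_SITE = """\
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.56.101</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>"""

root = ET.fromstring(HBASE_SITE)
props = {p.findtext("name"): p.findtext("value") for p in root.findall("property")}
print(props)
# → {'hbase.zookeeper.quorum': '192.168.56.101', 'hbase.zookeeper.property.clientPort': '2181'}
```

Note that hbase.zookeeper.quorum defaults to localhost; pointing it at the machine's real IP only helps if ZooKeeper is actually listening on that interface.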