Error executing hdfs zkfc command
Problem description
I am new to Hadoop and HDFS. I have done the following steps:
I have started ZooKeeper on the three NameNodes:
*vagrant@172:~$ zkServer.sh start
I can see the status:
*vagrant@172:~$ zkServer.sh status
Result Status:
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
With the jps command, only Jps appears, and sometimes QuorumPeerMain appears too:
*vagrant@172:~$ jps
2237 Jps
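When ZooKeeper is actually up, jps should list a QuorumPeerMain process alongside Jps; a listing with only Jps, as above, suggests the server is not running on that node. A small sketch of reading the jps output (the check_zk helper is hypothetical, for illustration only):

```shell
# Hypothetical helper: report whether a captured `jps` listing
# contains the ZooKeeper server process (QuorumPeerMain).
check_zk() {
  printf '%s\n' "$1" | grep -q QuorumPeerMain \
    && echo "zookeeper running" \
    || echo "zookeeper NOT running"
}

check_zk "2237 Jps"                 # the listing from the question → NOT running
check_zk "2201 QuorumPeerMain
2237 Jps"                           # → running
```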
When I run the next command:
* vagrant@172:~$ hdfs zkfc -formatZK
16/01/07 16:10:09 INFO zookeeper.ClientCnxn: Opening socket connection to server 172.16.8.192/172.16.8.192:2181. Will not attempt to authenticate using SASL (unknown error)
16/01/07 16:10:10 INFO zookeeper.ClientCnxn: Socket connection established to 172.16.8.192/172.16.8.192:2181, initiating session
16/01/07 16:10:11 INFO zookeeper.ClientCnxn: Session establishment complete on server 172.16.8.192/172.16.8.192:2181, sessionid = 0x2521cd93c970022, negotiated timeout = 6000
Usage: java zkfc [ -formatZK [-force] [-nonInteractive] ]
16/01/07 16:10:11 INFO ha.ActiveStandbyElector: Session connected.
16/01/07 16:10:11 INFO zookeeper.ZooKeeper: Session: 0x2521cd93c970022 closed
16/01/07 16:10:11 INFO zookeeper.ClientCnxn: EventThread shut down
16/01/07 16:10:12 FATAL tools.DFSZKFailoverController: Got a fatal error, exiting now
org.apache.hadoop.HadoopIllegalArgumentException: Bad argument: –formatZK
at org.apache.hadoop.ha.ZKFailoverController.badArg(ZKFailoverController.java:251)
at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:214)
at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)
at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:172)
at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:168)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
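Note that the fatal error quotes the flag as –formatZK with an en-dash (U+2013), while the CLI parser expects an ASCII hyphen; the argument was most likely pasted from a web page or mangled by smart-dash substitution. A quick sketch of why the two look identical but compare unequal:

```shell
# The two strings look alike but differ at the byte level:
bad='–formatZK'     # en-dash (U+2013), as quoted in the error message
good='-formatZK'    # ASCII hyphen, what `hdfs zkfc` expects

printf '%s' "$bad" | od -An -tx1 | head -n1   # first bytes: e2 80 93 ...
[ "$bad" = "$good" ] && echo "same" || echo "different"   # → different
```

Retyping the flag by hand as hdfs zkfc -formatZK (plain hyphen) resolves this particular Bad argument error.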
Any help with this error would be greatly appreciated.
My configuration is as follows:
bashrc
###JAVA CONFIGURATION###
JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$PATH:$JAVA_HOME/bin
###HADOOP CONFIGURATION###
HADOOP_PREFIX=/opt/hadoop-2.7.1/
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
###ZOOKEEPER###
export PATH=$PATH:/opt/zookeeper-3.4.6/bin
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///hdfs/data</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>auto-ha</value>
</property>
<property>
<name>dfs.ha.namenodes.auto-ha</name>
<value>nn01,nn02</value>
</property>
<property>
<name>dfs.namenode.rpc-address.auto-ha.nn01</name>
<value>172.16.8.191:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.auto-ha.nn01</name>
<value>172.16.8.191:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address.auto-ha.nn02</name>
<value>172.16.8.192:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.auto-ha.nn02</name>
<value>172.16.8.192:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://172.16.8.191:8485;172.16.8.192:8485;172.16.8.193:8485/auto-ha</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/hdfs/journalnode</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/vagrant/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled.auto-ha</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>172.16.8.191:2181,172.16.8.192:2181,172.16.8.193:2181</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://auto-ha</value>
</property>
</configuration>
zoo.cfg
tickTime=2000
dataDir=/opt/ZooData
clientPort=2181
initLimit=5
syncLimit=2
server.1=172.16.8.191:2888:3888
server.2=172.16.8.192:2888:3888
server.3=172.16.8.193:2888:3888
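One thing the zoo.cfg above relies on, but which is not shown, is a myid file in dataDir on each server, whose content must match that node's server.N line. A minimal sketch, using a temporary directory for illustration (on the real nodes dataDir is /opt/ZooData):

```shell
# Each ZooKeeper node stores its own id in dataDir/myid;
# it must match the N of that node's server.N entry in zoo.cfg.
datadir=$(mktemp -d)          # stand-in for /opt/ZooData
echo 1 > "$datadir/myid"      # on 172.16.8.191 (server.1); use 2 and 3 on the others
cat "$datadir/myid"           # → 1
```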
In the file hdfs-site.xml:
*I have changed all the IPs to the machine names. Example: 172.16.8.191 --> machine_Name1
Then in the file /etc/hosts:
*I have added all the IPs with their respective names.
And now it is working perfectly.
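The fix described above can be sketched as follows; machine_Name1..3 are the asker's placeholder names, and the snippet writes to a temporary file for illustration (on the real nodes the target is /etc/hosts and requires root):

```shell
# Map each cluster IP to a stable hostname, mirroring the renamed
# entries in hdfs-site.xml (machine_Name1..3 are placeholder names).
hosts_file=$(mktemp)          # stand-in for /etc/hosts
cat >> "$hosts_file" <<'EOF'
172.16.8.191 machine_Name1
172.16.8.192 machine_Name2
172.16.8.193 machine_Name3
EOF
grep -c 'machine_Name' "$hosts_file"   # → 3
```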