java.io.IOException: Incompatible clusterIDs


Problem description



I am installing Hadoop 2.7.2 (1 master NN, 1 secondary NN, 3 datanodes) and cannot start the datanodes! After troubleshooting the logs (see below), the fatal error is due to a ClusterID mismatch... easy! Just change the IDs. WRONG... when I check my VERSION files on the NameNode and the DataNodes, they are identical.

So the question is simple: in the log file, where is the ClusterID of the NameNode coming from?

LOG FILE:


WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/hduser/mydata/hdfs/datanode: namenode clusterID = **CID-8e09ff25-80fb-4834-878b-f23b3deb62d0**; datanode clusterID = **CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1**
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to master/172.XX.XX.XX:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1358)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1323)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:745)
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to master/172.XX.XX.XX:9000
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode

COPY OF THE VERSION FILES


The master

storageID=DS-f72f5710-a869-489d-9f52-40dadc659937
clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
cTime=0
datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b
storageType=DATA_NODE
layoutVersion=-56

The DataNode

storageID=DS-f72f5710-a869-489d-9f52-40dadc659937
clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1
cTime=0
datanodeUuid=54bc8b80-b84f-4893-8b96-36568acc5d4b
storageType=DATA_NODE
layoutVersion=-56

Solution

Just to summarize (and close) this issue, I would like to share how I fixed it.

On the MASTER and the 2nd NameNode, the NameNode VERSION file is under ~/.../namenode/current/VERSION.

BUT for DATANODES the path is different. It should look something like this: ~/.../datanode/current/VERSION

The clusterIDs between the two VERSION files should be identical.
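The fix can be sketched as a small shell session. The sandbox below uses temporary stand-in directories rather than the real paths from the question (/home/hduser/mydata/hdfs/namenode and .../datanode), so it is only an illustration of the mechanics; on a real cluster you would stop the DataNode first, edit its VERSION file in place, and restart it.

```shell
set -e
# Build a sandbox that mimics the namenode/datanode directory layout.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/namenode/current" "$ROOT/datanode/current"

# The NameNode holds the authoritative clusterID (value taken from the log above).
printf 'clusterID=CID-8e09ff25-80fb-4834-878b-f23b3deb62d0\n' \
    > "$ROOT/namenode/current/VERSION"
# The DataNode has a stale clusterID, e.g. left over from a re-run of
# `hdfs namenode -format`.
printf 'clusterID=CID-cd85e59a-ed4a-4516-b2ef-67e213cfa2a1\nstorageType=DATA_NODE\n' \
    > "$ROOT/datanode/current/VERSION"

# 1. Read the clusterID from the NameNode's VERSION file.
CID=$(grep '^clusterID=' "$ROOT/namenode/current/VERSION" | cut -d= -f2)

# 2. Rewrite the DataNode's clusterID to match (GNU sed in-place edit).
sed -i "s/^clusterID=.*/clusterID=${CID}/" "$ROOT/datanode/current/VERSION"

grep '^clusterID=' "$ROOT/datanode/current/VERSION"
# -> clusterID=CID-8e09ff25-80fb-4834-878b-f23b3deb62d0
```

After the edit, both VERSION files report the same clusterID, which is exactly the condition the DataNode checks during its handshake with the NameNode.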

Hope it helps!
