Incompatible clusterIDs in datanode and namenode


Problem description

I checked the solutions on this site.

I went to (hadoop folder)/data/dfs/datanode to change the ID.

But there isn't anything in the datanode folder.

What should I do?

Thanks for reading.

I would appreciate any help.

PS

2017-04-11 20:24:05,507 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-knu/dfs/data/

java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-knu/dfs/data: namenode clusterID = CID-4491e2ea-b0dd-4e54-a37a-b18aaaf5383b; datanode clusterID = CID-13a3b8e1-2f8e-4dd2-bcf9-c602420c1d3d

2017-04-11 20:24:05,509 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9010. Exiting.

java.io.IOException: All specified directories are failed to load.

2017-04-11 20:24:05,509 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9010

core-site.xml

<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9010</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
   <property>
            <name>dfs.replication</name>
            <value>1</value>
   </property>
   <property>
            <name>dfs.namenode.name.dir</name>
            <value>/home/knu/hadoop/hadoop-2.7.3/data/dfs/namenode</value>
    </property>
    <property>
            <name>dfs.namenode.checkpoint.dir</name>
            <value>/home/knu/hadoop/hadoop-2.7.3/data/dfs/namesecondary</value>
    </property>
    <property>
            <name>dfs.dataode.data.dir</name>
            <value>/home/knu/hadoop/hadoop-2.7.3/data/dfs/datanode</value>
    </property>
    <property>
            <name>dfs.http.address</name>
            <value>localhost:50070</value>
    </property>
    <property>
           <name>dfs.secondary.http.address</name>
            <value>localhost:50090</value>
    </property>
</configuration>

PS2

[knu@localhost ~]$ ls -l /home/knu/hadoop/hadoop-2.7.3/data/dfs/
drwxrwxr-x. 2 knu knu  6  4월 11 21:28 datanode
drwxrwxr-x. 3 knu knu 40  4월 11 22:15 namenode
drwxrwxr-x. 3 knu knu 40  4월 11 22:15 namesecondary

Answer

The problem is with the property name dfs.datanode.data.dir: it is misspelt as dfs.dataode.data.dir. This prevents the property from being recognised, and as a result the default location, ${hadoop.tmp.dir}/hadoop-${USER}/dfs/data, is used as the data directory.

hadoop.tmp.dir defaults to /tmp. On every reboot the contents of that directory are deleted, forcing the datanode to recreate the folder on startup. Hence the incompatible clusterIDs.
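You can confirm the mismatch directly: each initialised HDFS storage directory records its clusterID in a current/VERSION file. A minimal sketch, using the storage paths from the hdfs-site.xml above (adjust them for your cluster):

```shell
# Paths taken from this question's hdfs-site.xml; adjust for your cluster.
NN_DIR=/home/knu/hadoop/hadoop-2.7.3/data/dfs/namenode
DN_DIR=/home/knu/hadoop/hadoop-2.7.3/data/dfs/datanode

# Each initialised storage directory keeps its clusterID in current/VERSION.
nn_id=$(grep '^clusterID=' "$NN_DIR/current/VERSION" | cut -d= -f2)
dn_id=$(grep '^clusterID=' "$DN_DIR/current/VERSION" | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
    echo "clusterIDs match: $nn_id"
else
    echo "clusterID mismatch: namenode=$nn_id datanode=$dn_id"
fi
```

In this question the datanode's VERSION file lives under /tmp (because the misspelt property fell back to the default), which is why its clusterID diverges after a reboot.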

Fix this property name in hdfs-site.xml before formatting the namenode and starting the services.
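For reference, the corrected property block would look like this (same value as in the question, only the name fixed):

```xml
<property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/knu/hadoop/hadoop-2.7.3/data/dfs/datanode</value>
</property>
```

After fixing the name, reformat the namenode with `hdfs namenode -format` and restart HDFS (e.g. `start-dfs.sh`); the datanode will then use the persistent data directory instead of the volatile /tmp location.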
