The directory is already locked (Hadoop)

This article describes how to resolve the Hadoop error "the directory is already locked"; it may be a useful reference if you are hitting the same problem.

Problem description



I am getting below error while starting hadoop:

2015-09-04 08:49:05,648 ERROR org.apache.hadoop.hdfs.server.common.Storage: It appears that another node 854@ip-1-2-3-4 has already locked the storage directory: /mnt/xvdb/tmp/dfs/namesecondary
java.nio.channels.OverlappingFileLockException
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:712)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:678)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:499)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.recoverCreate(SecondaryNameNode.java:962)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:243)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
2015-09-04 08:49:05,650 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /mnt/xvdb/tmp/dfs/namesecondary. The directory is already locked
2015-09-04 08:49:05,650 FATAL org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Failed to start secondary namenode
java.io.IOException: Cannot lock storage /mnt/xvdb/tmp/dfs/namesecondary. The directory is already locked
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:683)
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:499)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.recoverCreate(SecondaryNameNode.java:962)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:243)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
2015-09-04 08:49:05,652 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-09-04 08:49:05,653 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down SecondaryNameNode at ip-@ip-1-2-3-4/@ip-1-2-3-4
************************************************************/
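The `OverlappingFileLockException` means some JVM on this host already holds the lock on that storage directory: HDFS marks each storage directory it owns with an `in_use.lock` file. A minimal sketch of checking for a leftover lock (the path `/tmp/demo-namesecondary` and the simulated lock file are hypothetical stand-ins for the real checkpoint directory; on an actual node you would also run `jps` to see which Hadoop daemons are still alive):

```shell
# Hypothetical demo directory standing in for /mnt/xvdb/tmp/dfs/namesecondary.
DIR=/tmp/demo-namesecondary
mkdir -p "$DIR"
touch "$DIR/in_use.lock"   # simulate a lock left behind by a daemon

# HDFS guards each storage directory with an in_use.lock file; if the file
# is present while no SecondaryNameNode is running, the lock is stale.
if [ -e "$DIR/in_use.lock" ]; then
  echo "lock file present: $DIR/in_use.lock"
fi
```

If the lock file belongs to a daemon that is still running (as the `854@ip-1-2-3-4` in the log suggests), stopping that daemon is the safe fix; deleting the lock out from under a live process is not.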

Hadoop version: 2.7.1 (3-node cluster)

hdfs-site.xml configuration file:

<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/mnt/xvdb/hadoop/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/mnt/xvdb/hadoop/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
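As an aside, `dfs.data.dir` and `dfs.name.dir` are the deprecated 1.x names; in Hadoop 2.x the current property names are `dfs.datanode.data.dir` and `dfs.namenode.name.dir` (both still accepted as aliases). An equivalent configuration using the current names might look like:

```xml
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/mnt/xvdb/hadoop/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/mnt/xvdb/hadoop/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```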

I have tried formatting the name node as well, but it didn't help. Can anyone help me with this?

Solution

I found a solution to the above problem here: http://misconfigurations.blogspot.in/2014/10/hadoop-initialization-failed-for-block.html

If there is any other solution, I would like to have a look.

P.S.: I deleted the directory pointed to by "dfs.datanode.data.dir". This erased all data on HDFS, but it did fix the issue. So you may want to use an alternate way, if there is one, to fix this problem.
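The P.S. above amounts to a wipe-and-reformat. A sketch of those steps as shell commands, under the caveat that this destroys all HDFS data: the paths come from this question's config and log, the commands are standard Hadoop CLI, and the `DRY_RUN` guard is my addition so the sketch only prints what it would do unless you explicitly disable it.

```shell
# DRY_RUN=true (the default) only prints each command instead of running it.
DRY_RUN=${DRY_RUN:-true}
run() { if [ "$DRY_RUN" = true ]; then echo "would run: $*"; else "$@"; fi; }

run stop-dfs.sh                             # stop all HDFS daemons first
run rm -rf /mnt/xvdb/hadoop/dfs/data        # wipe datanode storage (erases ALL HDFS data)
run rm -rf /mnt/xvdb/tmp/dfs/namesecondary  # clear the locked checkpoint directory
run hdfs namenode -format                   # reformat the namenode metadata
run start-dfs.sh                            # bring the cluster back up
```

Set `DRY_RUN=false` only on a cluster whose data you are genuinely willing to lose.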

This concludes the article on the Hadoop "the directory is already locked" error. We hope the answer above is helpful, and that you will continue to support IT屋!
