Why does Hadoop report "Unhealthy Node local-dirs and log-dirs are bad"?


Problem Description

I am trying to set up a single-node Hadoop 2.6.0 cluster on my PC.

On visiting http://localhost:8088/cluster, I find that my node is listed as an "unhealthy node".

In the health report, it provides the error:

1/1 local-dirs are bad: /tmp/hadoop-hduser/nm-local-dir; 
1/1 log-dirs are bad: /usr/local/hadoop/logs/userlogs

What's wrong?

Solution

The most common cause of "local-dirs are bad" is that disk utilization on the node has exceeded YARN's max-disk-utilization-per-disk-percentage threshold, which defaults to 90.0%.
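To confirm this is the problem, check utilization of the filesystems holding the two directories named in the health report (paths taken from the question; adjust to your setup):

df -h /tmp/hadoop-hduser/nm-local-dir /usr/local/hadoop/logs/userlogs

If the Use% column is above the 90% threshold, that is likely why the NodeManager marked the directory as bad.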

Either clean up the disk that the unhealthy node is running on, or increase the threshold in yarn-site.xml:

<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>98.5</value>
</property>
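The new threshold only takes effect once the NodeManager re-reads its configuration, so restart it afterwards. A minimal sketch, assuming a standard Hadoop 2.x layout with HADOOP_HOME pointing at your install directory (e.g. /usr/local/hadoop, as the log path in the question suggests):

# Restart the NodeManager so it picks up the new yarn-site.xml
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager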

Avoid disabling the disk check, because your jobs may fail when the disk eventually runs out of space, or if there are permission issues. Refer to the Disk Checker section of the yarn-site.xml documentation for more details.
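If disk space is not the issue, the permission problems mentioned above are the other thing worth ruling out: both directories must be writable by the user running the NodeManager. A quick sanity check, assuming the NodeManager runs as hduser (as the nm-local-dir path in the question suggests):

# Verify ownership and write access on both directories
ls -ld /tmp/hadoop-hduser/nm-local-dir /usr/local/hadoop/logs/userlogs
# Fix ownership only if it is wrong; the "hadoop" group here is an assumption about your setup
sudo chown -R hduser:hadoop /usr/local/hadoop/logs/userlogs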

FSCK

If you suspect there is a filesystem error on the directory, you can check by running:

hdfs fsck /tmp/hadoop-hduser/nm-local-dir
