Why does Hadoop report "Unhealthy Node local-dirs and log-dirs are bad"?
Question
I am trying to set up a single-node Hadoop 2.6.0 cluster on my PC.
On visiting http://localhost:8088/cluster, I find that my node is listed as an "unhealthy node".
In the health report, it provides the error:
1/1 local-dirs are bad: /tmp/hadoop-hduser/nm-local-dir;
1/1 log-dirs are bad: /usr/local/hadoop/logs/userlogs
What is wrong?
Answer
The most common cause of "local-dirs are bad" is disk utilization on the node exceeding YARN's max-disk-utilization-per-disk-percentage, whose default value is 90.0%.
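To see whether a node has crossed that threshold, you can compare the mount's usage against it by hand (a rough sketch of the check YARN performs, assuming GNU `df` and that nm-local-dir sits under /tmp, as in the error message above):

```shell
# Utilization (Use%) of the filesystem holding the NodeManager's local dir;
# nm-local-dir lives under /tmp here, per the error message above.
usage=$(df --output=pcent /tmp | tail -n 1 | tr -d ' %')

# YARN's default max-disk-utilization-per-disk-percentage is 90.
threshold=90
if [ "$usage" -le "$threshold" ]; then
  echo "healthy: ${usage}% used"
else
  echo "unhealthy: ${usage}% used exceeds ${threshold}%"
fi
```

If the printed utilization is above 90%, that matches what the NodeManager's disk health checker is reporting.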
Either clean up the disk that the unhealthy node is running on, or raise the threshold in yarn-site.xml:
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>98.5</value>
</property>
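After editing yarn-site.xml, the NodeManager needs a restart to pick up the new threshold (a sketch assuming a standard Hadoop 2.x tarball install with $HADOOP_HOME set; yarn-daemon.sh is the Hadoop 2.x per-daemon control script):

```shell
# Restart the NodeManager so the raised threshold takes effect
# ($HADOOP_HOME is assumed to point at your Hadoop 2.6.0 install)
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
```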
Avoid disabling the disk check, because your jobs may fail when the disk eventually runs out of space, or if there are permission issues. Refer to the yarn-site.xml Disk Checker section for more details.
If you suspect there is a filesystem error on the directory, you can check by running:
hdfs fsck /tmp/hadoop-hduser/nm-local-dir