Why am I getting "Too many fetch-failures" every other day
Problem description
From one or another of the TaskTrackers, I get this error whenever we run two big Pig jobs that crunch about 400 GB of data. We found that after killing the jobs and keeping the cluster idle for a while, everything goes fine again. Please suggest what the real issue could be.
Solution: modify the /etc/hosts file on each datanode.

The hosts file has a simple format: each line is divided into three parts. The first part is the network IP address, the second part is the host name or domain name, and the third part is the host alias. The detailed steps are as follows:

1. First, check the host name:
cat /proc/sys/kernel/hostname
You will see a HOSTNAME value; change it to the correct IP, then save and exit.

2. Use the command:
hostname *.*.*.*
The asterisks are replaced by the corresponding IP address.

3. Modify the hosts configuration, similar to the following:
127.0.0.1      localhost.localdomain    localhost
::1            localhost6.localdomain6  localhost6
10.200.187.77  10.200.187.77            hadoop-datanode
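As a sanity check, the entries above can be tested with `getent hosts`, which queries the same resolver path the Hadoop daemons use. Note that `hadoop-datanode` is the example name from this answer; substitute your own node names.

```shell
#!/bin/sh
# Check that each name in /etc/hosts actually resolves.
# "hadoop-datanode" is the example name from this answer; replace the
# list below with the host names of your own cluster nodes.
for name in localhost hadoop-datanode; do
  if getent hosts "$name" > /dev/null; then
    # Print the address the resolver returned for this name.
    echo "$name resolves to $(getent hosts "$name" | awk '{print $1}')"
  else
    echo "$name does NOT resolve"
  fi
done
```

Every node should be able to resolve every other node's name this way; a name that fails to resolve on even one slave is enough to trigger fetch-failures.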
Check that the IP address change took effect; if the host name still shows a problem, continue to modify the hosts file.
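Finally, a quick way to confirm that steps 1 and 2 took effect is to compare the kernel's idea of the hostname with what the `hostname` command reports; a minimal sketch:

```shell
#!/bin/sh
# Compare the kernel hostname with the hostname command's output.
# A mismatch between the two on any node is a common source of
# "Too many fetch-failures", because the node advertises a name that
# other nodes cannot resolve.
kernel_name=$(cat /proc/sys/kernel/hostname)
cmd_name=$(hostname)
echo "kernel:  $kernel_name"
echo "command: $cmd_name"
if [ "$kernel_name" = "$cmd_name" ]; then
  echo "hostnames agree"
else
  echo "hostname mismatch" >&2
fi
```

Run this on each datanode after editing /etc/hosts; both values should agree and should match the name used in the hosts file.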
This concludes the article on why I am getting "Too many fetch-failures" every other day. We hope the answer recommended here helps, and thank you for supporting IT屋!