Hadoop Error - All data nodes are aborting


Problem Description


I am using Hadoop version 2.3.0. Sometimes when I execute a MapReduce job, the error below is displayed.

14/08/10 12:14:59 INFO mapreduce.Job: Task Id : attempt_1407694955806_0002_m_000780_0, Status : FAILED
Error: java.io.IOException: All datanodes 192.168.30.2:50010 are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1023)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:838)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:483)


When I try to check the log files for these failed tasks, the log folder for the task is empty.

I am not able to understand the reason behind this error. Could someone please let me know how to resolve this issue? Thanks for your help.

Solution

You seem to be hitting the open file handle limit of your user. This is a pretty common issue, and in most cases it can be cleared by increasing the ulimit values (the default is usually 1024, which is easily exhausted by multi-output jobs like yours).

You can follow this short guide to increase it: http://blog.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/ [The section "File descriptor limits"]

Answered by Harsh J - https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/kJRUkVxmfhw
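As a rough sketch of what raising the limit involves (the "hadoop" user name and the 32768 value below are illustrative assumptions, not values taken from the guide above; adjust them for the account that actually runs your DataNode and task processes):

    # Check the current per-process open-file limit for the current shell user
    ulimit -n

    # To raise it persistently, add entries like these to /etc/security/limits.conf
    # ("hadoop" and 32768 are example values only):
    hadoop  soft  nofile  32768
    hadoop  hard  nofile  32768

    # Log in again so new sessions pick up the limit, then re-check with: ulimit -n

Because a process inherits its limits at startup, the DataNode and task processes also have to be restarted after the change before the higher limit applies to them.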
