DiskErrorException on slave machine - Hadoop multinode

Problem description

I am trying to process XML files with Hadoop, and I get the following error when invoking a word-count job on the XML files:

13/07/25 12:39:57 INFO mapred.JobClient: Task Id : attempt_201307251234_0001_m_000008_0, Status : FAILED
Too many fetch-failures
13/07/25 12:39:58 INFO mapred.JobClient:  map 99% reduce 0%
13/07/25 12:39:59 INFO mapred.JobClient:  map 100% reduce 0%
13/07/25 12:40:56 INFO mapred.JobClient: Task Id : attempt_201307251234_0001_m_000009_0, Status : FAILED
Too many fetch-failures
13/07/25 12:40:58 INFO mapred.JobClient:  map 99% reduce 0%
13/07/25 12:40:59 INFO mapred.JobClient:  map 100% reduce 0%
13/07/25 12:41:22 INFO mapred.JobClient:  map 100% reduce 1%
13/07/25 12:41:57 INFO mapred.JobClient: Task Id : attempt_201307251234_0001_m_000015_0, Status : FAILED
Too many fetch-failures
13/07/25 12:41:58 INFO mapred.JobClient:  map 99% reduce 1%
13/07/25 12:41:59 INFO mapred.JobClient:  map 100% reduce 1%
13/07/25 12:42:57 INFO mapred.JobClient: Task Id : attempt_201307251234_0001_m_000014_0, Status : FAILED
Too many fetch-failures
13/07/25 12:42:58 INFO mapred.JobClient:  map 99% reduce 1%
13/07/25 12:42:59 INFO mapred.JobClient:  map 100% reduce 1%
13/07/25 12:43:22 INFO mapred.JobClient:  map 100% reduce 2%
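
(For context: a word-count job on a Hadoop 1.x cluster like this one is usually launched as shown below. The examples jar and HDFS paths here are assumptions for illustration; the asker's actual job, which reads XML, presumably uses a custom jar and input format.)

    # Hypothetical invocation; substitute your own jar and HDFS paths.
    hadoop jar $HADOOP_HOME/hadoop-examples-1.2.1.jar wordcount \
        /user/hduser/xml-input /user/hduser/xml-output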

I observed the following error in the hadoop-hduser-tasktracker-localhost.localdomain.log file on the slave machine:

2013-07-25 12:38:58,124 WARN org.apache.hadoop.mapred.TaskTracker: getMapOutput(attempt_201307251234_0001_m_000001_0,0) failed :
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/hduser/jobcache/job_201307251234_0001/attempt_201307251234_0001_m_000001_0/output/file.out.index in any of the configured local directories
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
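
The "configured local directories" named in the exception are whatever mapred.local.dir points to on that TaskTracker. As a reference point (a typical-setup assumption, not something stated in the question), the property lives in mapred-site.xml:

    <!-- mapred-site.xml on each TaskTracker: scratch space for map output.
         The path is a placeholder; it must exist and be writable by the
         user running the TaskTracker (hduser here). -->
    <property>
      <name>mapred.local.dir</name>
      <value>/app/hadoop/tmp/mapred/local</value>
    </property>

When a slave cannot locate a map output index (file.out.index) under these directories, reducers cannot fetch that map's output, which is what surfaces as the "Too many fetch-failures" messages above.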

This works fine when I run the same job on text files.

Recommended answer

Looks like you have hit this issue. Either apply the patch or download the fixed version, and you should be good to go.
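
(If you go the upgrade route, it is worth confirming that every node actually runs the fixed build; hadoop version prints the build in use on each machine.)

    # Run on the master and on every slave; the outputs should match.
    hadoop version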

HTH
