Hadoop error in shuffle in fetcher#1

Problem Description

I'm running a parsing job in Hadoop. The source is an 11 GB map file containing about 900,000 binary records, each representing an HTML file; the map extracts links and writes them to the context (a sketch of such a mapper appears after the list below). I have no reducer written for this job.

  • When I run it on a smaller file (about 5 GB, roughly 500,000 records) it works fine.
  • This is a single-machine cluster.
  • The output is about 100 million records, TEXT.
  • 11 of the 200 scheduled map tasks fail.
  • I'm using Hadoop 0.22.0.
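
It is worth noting why a job "without a reducer" hits a shuffle error at all: the stack trace below runs inside a ReduceTask, so the job is using Hadoop's default identity reducer (the reduce-task count was evidently not set to 0), and the shuffle feeds that reducer. For context, here is a minimal sketch of such a link-extracting mapper; the class name, the (Text, BytesWritable) record types, and the crude href parsing are illustrative assumptions, not the asker's actual code.

import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LinkExtractMapper
    extends Mapper<Text, BytesWritable, Text, NullWritable> {

  private final Text link = new Text();

  @Override
  protected void map(Text key, BytesWritable value, Context context)
      throws IOException, InterruptedException {
    // Decode the stored HTML bytes; getLength() matters because the
    // backing array of a BytesWritable is usually larger than the record.
    String html = new String(value.getBytes(), 0, value.getLength(), "UTF-8");
    // Crude href extraction, for illustration only; a real job would use
    // an HTML parser.
    String[] parts = html.split("href=\"");
    for (int i = 1; i < parts.length; i++) {
      int end = parts[i].indexOf('"');
      if (end > 0) {
        link.set(parts[i].substring(0, end));
        context.write(link, NullWritable.get());
      }
    }
  }
}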

I get the following error:

org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#1
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:124)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:362)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:223)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
    at org.apache.hadoop.mapred.Child.main(Child.java:217)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:58)
    at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:45)
    at org.apache.hadoop.mapreduce.task.reduce.MapOutput.<init>(MapOutput.java:104)
    at org.apache.hadoop.mapreduce.task.reduce.MergeManager.unconditionalReserve(MergeManager.java:267)

This is my mapreduce-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>Hadp01:8012</value>
    <description>The host and port that the MapReduce job tracker runs
    at. If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/BigData1/MapReduce,/BigData2/MapReduce</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1536m</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>300</value>
  </property>
  <property>
    <name>io.sort.mb</name>
    <value>300</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>tasktracker.http.threads</name>
    <value>80</value>
  </property>
</configuration>

Does anyone have any idea how to fix it? Thank you!

Recommended Answer

This error is caused by mapreduce.reduce.shuffle.memory.limit.percent. By default:

mapreduce.reduce.shuffle.memory.limit.percent=0.25

This property caps how much of the reducer's in-memory shuffle buffer a single fetched map output may reserve; lowering it makes oversized map outputs spill to disk during the shuffle instead of exhausting the heap, as in the stack trace above.

To resolve this problem, I restricted my reduce's shuffle memory usage. In Hive:

set mapreduce.reduce.shuffle.memory.limit.percent=0.15;

MapReduce:

job.getConfiguration().setStrings("mapreduce.reduce.shuffle.memory.limit.percent", "0.15");
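
For context, here is a minimal sketch of where that line sits in a job driver. This is an illustration under assumptions, not the asker's actual code: the class names and paths are hypothetical, LinkExtractMapper refers to the sketch above, and new Job(conf, name) is used because Job.getInstance was not yet available in Hadoop 0.22.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ParseJobDriver {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "html link extraction");
    // Lower the per-map-output shuffle memory cap from the 0.25 default,
    // as in the answer above.
    job.getConfiguration().setStrings(
        "mapreduce.reduce.shuffle.memory.limit.percent", "0.15");
    job.setJarByClass(ParseJobDriver.class);
    job.setMapperClass(LinkExtractMapper.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(NullWritable.class);
    // MapFile data files are SequenceFiles, so this input format can read them.
    job.setInputFormatClass(SequenceFileInputFormat.class);
    // No reducer class is set, so Hadoop's default identity reducer runs,
    // which is what the shuffle feeds.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}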

This works around the shuffle error.
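
The same property can also be set cluster-wide instead of per job. Here is a sketch of the entry, following the format of the asker's mapreduce-site.xml (whether per-job or site-wide is the better scope depends on the workload):

<property>
  <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
  <value>0.15</value>
</property>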
