Yarn container launch failed exception and mapred-site.xml configuration


Question

I have 7 nodes in my Hadoop cluster [8GB RAM and 4 vCPUs per node], 1 Namenode + 6 Datanodes.

EDIT-1@ARNON: I followed the link, made the calculations according to the hardware configuration on my nodes, and added the updated mapred-site.xml and yarn-site.xml files to my question. Still my application is crashing with the same exception.

My mapreduce application has 34 input splits with a block size of 128MB.

mapred-site.xml has the following properties:

mapreduce.framework.name  = yarn
mapred.child.java.opts    = -Xmx2048m
mapreduce.map.memory.mb   = 4096
mapreduce.map.java.opts   = -Xmx2048m

yarn-site.xml has the following properties:

yarn.resourcemanager.hostname        = hadoop-master
yarn.nodemanager.aux-services        = mapreduce_shuffle
yarn.nodemanager.resource.memory-mb  = 6144
yarn.scheduler.minimum-allocation-mb = 2048
yarn.scheduler.maximum-allocation-mb = 6144
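
For reference, each key = value pair above corresponds to a `<property>` entry in the actual XML file; a minimal sketch of `yarn-site.xml` with the values from the question (`mapred-site.xml` uses the same format):

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>6144</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>6144</value>
  </property>
</configuration>
```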

EDIT-2@ARNON: Setting yarn.scheduler.minimum-allocation-mb to 4096 puts all the map tasks in a suspended state, and setting it to 3072 crashes with the following:

Exception from container-launch: ExitCodeException exitCode=134: /bin/bash: line 1:  3876 Aborted  (core dumped) /usr/lib/jvm/java-7-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx8192m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 192.168.0.12 50842 attempt_1424264025191_0002_m_000005_0 11 > 
/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stdout 2> 
/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stderr
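
A side note on the error above: exit code 134 follows the shell convention of 128 + signal number, i.e. the JVM was killed by SIGABRT, which matches the "Aborted (core dumped)" message. A quick check of that arithmetic:

```python
import signal

exit_code = 134
sig = exit_code - 128  # shell convention: exit code 128 + N means "killed by signal N"
print(sig, signal.Signals(sig).name)  # 6 SIGABRT
```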

How can I avoid this? Any help is appreciated.

Is there an option to restrict the number of containers on Hadoop nodes?

Solution

It seems you are allocating too much memory for your tasks (even without looking at all the configurations): 8GB RAM per node and 8GB per map task, all of it heap. Try lower allocations, e.g. a 2GB container with a 1GB heap, or something like that.
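
On the container-count question: YARN has no explicit "max containers" setting; the scheduler fits as many containers as the node's resources allow at the allocation size. A back-of-the-envelope sketch using the values from the question (the per-node and cluster-wide counts are derived, not from the original answer):

```python
# Values taken from the question's yarn-site.xml and cluster description.
node_memory_mb = 6144   # yarn.nodemanager.resource.memory-mb
min_alloc_mb   = 2048   # yarn.scheduler.minimum-allocation-mb
datanodes      = 6      # nodes running NodeManagers

# The scheduler packs containers by memory, so the count per node is bounded by:
containers_per_node = node_memory_mb // min_alloc_mb
cluster_wide        = containers_per_node * datanodes
print(containers_per_node, cluster_wide)  # 3 18

# The answer's suggestion: a 2GB container with a 1GB heap, leaving headroom
# for the JVM's non-heap overhead (metaspace, thread stacks, buffers).
map_memory_mb = 2048    # mapreduce.map.memory.mb
map_heap_mb   = 1024    # mapreduce.map.java.opts = -Xmx1024m
assert map_heap_mb < map_memory_mb
```

Raising yarn.scheduler.minimum-allocation-mb shrinks this count (6144 // 4096 = 1 container per node), which is consistent with the tasks piling up in a pending state in EDIT-2.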
