Why YARN java heap space memory error?


Problem description

I want to try setting memory in YARN, so I am trying to configure some parameters in yarn-site.xml and mapred-site.xml. By the way, I use Hadoop 2.6.0. However, I get an error when I run a MapReduce job. It says:

15/03/12 10:57:23 INFO mapreduce.Job: Task Id :
attempt_1426132548565_0001_m_000002_0, Status : FAILED
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

I think I have configured it correctly: I gave map.java.opts and reduce.java.opts a small size of 64 MB. I have tried configuring some parameters, such as changing map.java.opts and reduce.java.opts in mapred-site.xml, and I still get this error. I think I do not really understand how YARN memory works. By the way, I am trying this on a single-node machine.

Solution

YARN handles resource management and serves batch workloads that can use MapReduce as well as real-time workloads.

Memory settings can be configured at the YARN container level as well as at the mapper and reducer level. Memory is requested in increments of the YARN container size. Mapper and reducer tasks run inside containers.
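
For orientation, these container-level limits live in yarn-site.xml. The property names below are standard Hadoop 2.x settings, but the values are only illustrative placeholders, not recommendations:

<!-- yarn-site.xml (illustrative values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- total memory the NodeManager may hand out to all containers on this node -->
  <value>4096</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <!-- smallest container YARN will allocate; requests are rounded up in these increments -->
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <!-- largest single container YARN will allocate -->
  <value>4096</value>
</property>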



mapreduce.map.memory.mb and mapreduce.reduce.memory.mb

These parameters define the upper memory limit for a map/reduce task; if the memory subscribed by the task exceeds this limit, the corresponding container will be killed.

These parameters determine the maximum amount of memory that can be assigned to mapper and reducer tasks respectively. Let us look at an example: a mapper is bound by the upper memory limit defined in the configuration parameter mapreduce.map.memory.mb.

However, if the value of yarn.scheduler.minimum-allocation-mb is greater than mapreduce.map.memory.mb, then yarn.scheduler.minimum-allocation-mb is respected and containers of that size are given out.

These parameters need to be set carefully; if they are not set properly, they can lead to poor performance or OutOfMemory errors.
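
To make the interaction concrete, here is a sketch with made-up numbers: if mapred-site.xml asks for 512 MB per map task but yarn.scheduler.minimum-allocation-mb is 1024, each map container will actually be handed out at 1024 MB.

<!-- mapred-site.xml (illustrative values) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>

<!-- yarn-site.xml (illustrative values) -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <!-- 1024 > 512, so map containers still come out at 1024 MB -->
  <value>1024</value>
</property>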

mapreduce.reduce.java.opts and mapreduce.map.java.opts

This property value must be less than the upper bound for the map/reduce task defined in mapreduce.map.memory.mb / mapreduce.reduce.memory.mb, since it has to fit within the memory allocation for the map/reduce task.
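
A common rule of thumb (an assumption here, not something stated above) is to cap the JVM heap at roughly 75-80% of the container size so there is headroom for non-heap memory. For example, with mapreduce.map.memory.mb and mapreduce.reduce.memory.mb set to 1024, mapred-site.xml might contain:

<property>
  <name>mapreduce.map.java.opts</name>
  <!-- heap must stay below mapreduce.map.memory.mb (1024 MB in this sketch) -->
  <value>-Xmx800m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <!-- likewise below mapreduce.reduce.memory.mb -->
  <value>-Xmx800m</value>
</property>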


