What is the relation between 'mapreduce.map.memory.mb' and 'mapred.map.child.java.opts' in Apache Hadoop YARN?

This article looks at the relation between 'mapreduce.map.memory.mb' and 'mapred.map.child.java.opts' in Apache Hadoop YARN; it may be a useful reference if you have run into the same question.

Problem description



I would like to know the relation between the mapreduce.map.memory.mb and mapred.map.child.java.opts parameters.

Is mapreduce.map.memory.mb > mapred.map.child.java.opts?

Thanks, Kewal.

Solution

mapreduce.map.memory.mb is the upper memory limit that Hadoop allows to be allocated to a mapper, in megabytes. The default is 512. If this limit is exceeded, Hadoop will kill the mapper with an error like this:

Container[pid=container_1406552545451_0009_01_000002,containerID=container_234132_0001_01_000001] is running beyond physical memory limits. Current usage: 569.1 MB of 512 MB physical memory used; 970.1 MB of 1.0 GB virtual memory used. Killing container.
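
As a minimal sketch (assuming a Hadoop 2+ MapReduce job, and using 1024 MB purely as an example value, not a recommendation), the container limit can be set per job through the standard Configuration API:

import org.apache.hadoop.conf.Configuration;

public class MapMemoryLimitSketch {                 // hypothetical driver class
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Upper limit, in MB, that YARN enforces on each map container;
        // exceeding it produces the "Killing container" error shown above.
        // 1024 is only an example value.
        conf.setInt("mapreduce.map.memory.mb", 1024);
        System.out.println(conf.get("mapreduce.map.memory.mb")); // prints 1024
    }
}

The same property can also be set in mapred-site.xml or, for jobs that use ToolRunner, with a -D flag on the command line; the programmatic form is shown only to keep the example self-contained.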

A Hadoop mapper is a Java process, and each Java process has its own maximum heap allocation, configured via mapred.map.child.java.opts (or mapreduce.map.java.opts in Hadoop 2+). If the mapper process runs out of heap memory, it throws a Java out-of-memory exception:

Error: java.lang.RuntimeException: java.lang.OutOfMemoryError
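
A matching sketch for the heap setting (the -Xmx value below is illustrative only; "mapreduce.map.java.opts" is the Hadoop 2+ property name mentioned above):

import org.apache.hadoop.conf.Configuration;

public class MapHeapOptsSketch {                    // hypothetical driver class
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Maximum JVM heap handed to each map task. On older (Hadoop 1.x) jobs
        // the equivalent key is "mapred.map.child.java.opts".
        // -Xmx819m is only an example, sized for a 1024 MB container.
        conf.set("mapreduce.map.java.opts", "-Xmx819m");
        System.out.println(conf.get("mapreduce.map.java.opts")); // prints -Xmx819m
    }
}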

Thus, the Hadoop and Java settings are related. The Hadoop setting is more about resource enforcement and control, while the Java setting is more about resource configuration.

The Java heap setting should be smaller than the Hadoop container memory limit because we need to reserve memory for the Java code itself. Usually, it is recommended to reserve about 20% of the memory for code. So if the settings are correct, Java-based Hadoop tasks should never be killed by Hadoop, and you should never see the "Killing container" error shown above.
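
A minimal sketch of that sizing rule, assuming the 20% headroom suggested above (the helper name and the 2048 MB container size are made up for illustration):

import org.apache.hadoop.conf.Configuration;

public class HeapSizingSketch {
    // Derive an -Xmx value that leaves roughly 20% of the container
    // for non-heap use, per the rule of thumb above.
    static String heapOptsFor(int containerMb) {
        int heapMb = (int) (containerMb * 0.8);
        return "-Xmx" + heapMb + "m";
    }

    public static void main(String[] args) {
        int containerMb = 2048;                           // example container size in MB
        Configuration conf = new Configuration();
        conf.setInt("mapreduce.map.memory.mb", containerMb);
        conf.set("mapreduce.map.java.opts", heapOptsFor(containerMb));
        System.out.println(conf.get("mapreduce.map.java.opts")); // prints -Xmx1638m
    }
}

Deriving the heap from the container size like this keeps the two values coupled, so the heap stays under the ceiling that YARN enforces.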

If you experience Java out of memory errors, you have to increase both memory settings.

This concludes the discussion of the relation between 'mapreduce.map.memory.mb' and 'mapred.map.child.java.opts' in Apache Hadoop YARN; we hope the answer above is helpful.
