Could not deallocate container for task attemptId NNN


Problem description

I'm trying to understand how containers are allocated memory in YARN, and how their performance depends on different hardware configurations.

The machine has 30 GB of RAM; I picked 24 GB for YARN and left 6 GB for the system.

yarn.nodemanager.resource.memory-mb=24576

Then I followed http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html to come up with some values for Map & Reduce task memory.
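For reference, the arithmetic in that guide works roughly as sketched below. Only the 30 GB / 6 GB figures come from the question; the core count, disk count, and minimum container size are assumptions for illustration:

# Rough sketch of the HDP container-sizing arithmetic.
# Assumed values: 8 cores, 4 disks, 2048 MB minimum container size.
cores, disks = 8, 4
total_ram_mb, reserved_mb = 30 * 1024, 6 * 1024
min_container_mb = 2048

available_mb = total_ram_mb - reserved_mb                                   # 24576
containers = int(min(2 * cores, 1.8 * disks, available_mb / min_container_mb))
ram_per_container_mb = max(min_container_mb, available_mb // containers)

print("yarn.nodemanager.resource.memory-mb =", containers * ram_per_container_mb)
print("yarn.scheduler.maximum-allocation-mb =", containers * ram_per_container_mb)
print("mapreduce.map.memory.mb =", ram_per_container_mb)
print("mapreduce.reduce.memory.mb =", 2 * ram_per_container_mb)
print("mapreduce.map.java.opts = -Xmx%dm" % int(0.8 * ram_per_container_mb))
print("mapreduce.reduce.java.opts = -Xmx%dm" % int(0.8 * 2 * ram_per_container_mb))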

I left these two at their default values:

mapreduce.map.memory.mb
mapreduce.map.java.opts

But I changed these two configurations:

mapreduce.reduce.memory.mb=20480
mapreduce.reduce.java.opts=-Xmx16384m
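A common rule of thumb (an assumption here, not something stated in the question) is to size the JVM heap in java.opts at roughly 80% of the container, leaving headroom for non-heap JVM overhead; the 16384m above is 0.8 * 20480. A minimal sketch of that relationship:

# Rule-of-thumb heap sizing: heap ~= 80% of the container (assumption, not a Hadoop requirement).
reduce_container_mb = 20480
reduce_heap_mb = int(0.8 * reduce_container_mb)   # 16384
print("mapreduce.reduce.java.opts = -Xmx%dm" % reduce_heap_mb)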

But when I submit a job with these settings, I get an error and the job is forcibly killed:

2015-03-10 17:18:18,019 ERROR [Thread-51] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1426006703004_0004_r_000000_0

The only value that has worked for me so far is setting the reducer memory to <= 12 GB, but why is that? Why can't I allocate more memory, or up to 2 * RAM-per-container?

So what am I missing here? Is there anything else I need to consider when setting these values for better performance?

Answer

I resolved this issue by changing the yarn.scheduler.maximum-allocation-mb value. In YARN, a job cannot use more memory than the server-side setting yarn.scheduler.maximum-allocation-mb. Although I had set yarn.nodemanager.resource.memory-mb, the maximum allocation size also needs to reflect it. After updating the maximum allocation, the job worked as expected:

yarn.nodemanager.resource.memory-mb=24576
yarn.scheduler.maximum-allocation-mb=24576
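As a quick illustration of the constraint described above (a minimal sketch, not part of the original answer): a per-task request larger than yarn.scheduler.maximum-allocation-mb can never be granted, so the reduce container must fit under the scheduler maximum:

# Sanity check: the reduce container request must not exceed the scheduler's maximum allocation.
scheduler_max_mb = 24576      # yarn.scheduler.maximum-allocation-mb
reduce_request_mb = 20480     # mapreduce.reduce.memory.mb
print("reduce container fits:", reduce_request_mb <= scheduler_max_mb)   # True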

