Why does Yarn on EMR not allocate all nodes to running Spark jobs?

Question

I'm running a job on Apache Spark on Amazon Elastic Map Reduce (EMR). Currently I'm running on emr-4.1.0 which includes Amazon Hadoop 2.6.0 and Spark 1.5.0.

When I start the job, YARN correctly allocates all of the worker nodes to the Spark job (with one for the driver, of course).

I have the magic "maximizeResourceAllocation" property set to "true", and the spark property "spark.dynamicAllocation.enabled" also set to "true".

However, if I resize the EMR cluster by adding nodes to the CORE pool of worker machines, YARN only adds some of the new nodes to the Spark job.

For example, this morning I had a job that was using 26 nodes (m3.2xlarge, if that matters) - 1 for the driver, 25 executors. I wanted to speed up the job, so I tried adding 8 more nodes. YARN picked up all of the new nodes but allocated only 1 of them to the Spark job. Spark did successfully pick up the new node and is using it as an executor, but my question is: why is YARN letting the other 7 nodes just sit idle?
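
For reference, one way to add CORE nodes like this is to resize the CORE instance group with the AWS CLI; the cluster ID, instance group ID, and target count below are placeholders, not values from my setup:

# Sketch only: grow the CORE instance group via the AWS CLI.
# j-XXXXXXXXXXXXX and ig-XXXXXXXXXXXXX are placeholders - look up the real IDs first.
aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX \
    --query "Cluster.InstanceGroups[?InstanceGroupType=='CORE'].Id"

# InstanceCount is the new total size of the CORE group, not a delta.
aws emr modify-instance-groups \
    --instance-groups InstanceGroupId=ig-XXXXXXXXXXXXX,InstanceCount=33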

It's annoying for obvious reasons - I have to pay for the resources even though they're not being used, and my job hasn't sped up at all!

Does anybody know how YARN decides when to add nodes to running Spark jobs? What variables come into play? Memory? VCores? Anything?

Thanks in advance!

Answer

Okay, with the help of @sean_r_owen, I was able to track this down.

The problem was this: when setting spark.dynamicAllocation.enabled to true, spark.executor.instances shouldn't be set - an explicit value for that will override dynamic allocation and turn it off. It turns out that EMR sets it in the background if you do not set it yourself. To get the desired behaviour, you need to explicitly set spark.executor.instances to 0.
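
The same pair of settings can also be passed per job instead of cluster-wide; a minimal spark-submit sketch (the class and jar names here are made-up placeholders) would look like:

# Sketch only: per-job equivalent of the spark-defaults shown below.
# com.example.MyJob and my-job.jar are placeholder names.
spark-submit \
    --master yarn \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.executor.instances=0 \
    --class com.example.MyJob \
    my-job.jar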

For the record, here are the contents of one of the files we pass to the --configurations flag when creating an EMR cluster:

[
    {
        "Classification": "capacity-scheduler",
        "Properties": {
            "yarn.scheduler.capacity.resource-calculator": "org.apache.hadoop.yarn.util.resource.DominantResourceCalculator"
        }
    },

    {
        "Classification": "spark",
        "Properties": {
            "maximizeResourceAllocation": "true"
        }
    },

    {
        "Classification": "spark-defaults",
        "Properties": {
            "spark.dynamicAllocation.enabled": "true",
            "spark.executor.instances": "0"
        }
    } 
]
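
Assuming that JSON is saved locally as spark-config.json (a filename I'm choosing here), it can be passed in at cluster-creation time roughly like this; the other flags are only illustrative:

# Sketch only: create the cluster with the configuration file above.
# spark-config.json is an assumed local filename; adjust instance sizes and counts to taste.
aws emr create-cluster \
    --release-label emr-4.1.0 \
    --applications Name=Spark \
    --instance-type m3.2xlarge \
    --instance-count 26 \
    --use-default-roles \
    --configurations file://./spark-config.json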

This gives us an EMR cluster where Spark uses all the nodes, including added nodes, when running jobs. It also appears to use all/most of the memory and all (?) the cores.

(I'm not entirely sure that it's using all the actual cores, but it is definitely using more than one VCore, which it wasn't doing before. Following Glennie Helles's advice, it is now behaving better and using half of the listed VCores, which seems to equal the actual number of cores...)
