Apache Spark: setting executor instances does not change the executors


Question


I have an Apache Spark application running in cluster mode on a YARN cluster (the cluster has 3 nodes).


When the application is running, the Spark UI shows 2 executors (each running on a different node), with the driver running on the third node. I want the application to use more executors, so I tried adding the argument --num-executors to spark-submit and set it to 6.

spark-submit --driver-memory 3g --num-executors 6 --class main.Application --executor-memory 11g --master yarn-cluster myJar.jar <arg1> <arg2> <arg3> ...


However, the number of executors remains 2.


In the Spark UI I can see that the parameter spark.executor.instances is 6, just as I intended, but somehow there are still only 2 executors.


I even tried setting this parameter from the code

sparkConf.set("spark.executor.instances", "6")


Again, I can see that the parameter was set to 6, but still there are only 2 executors.


Does anyone know why I couldn't increase the number of my executors?


yarn.nodemanager.resource.memory-mb is 12g in yarn-site.xml

Answer

Increase yarn.nodemanager.resource.memory-mb in yarn-site.xml.
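For example, the property could be raised like this in yarn-site.xml (the 24 GB value is illustrative; pick a value that matches the actual RAM on each node, leaving headroom for the OS and other daemons):

```xml
<!-- yarn-site.xml: memory YARN may allocate to containers on each node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value> <!-- 24 GB; illustrative, not from the original answer -->
</property>
```

The NodeManagers must be restarted for the change to take effect.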


With 12g per node you can only launch the driver (3g) and 2 executors (11g each).


Node1 - driver 3g (+7% overhead)


Node2 - executor1 11g (+7% overhead)


Node3 - executor2 11g (+7% overhead)


Now you are requesting executor3 with 11g, and no node has 11g of memory available.


For the 7% overhead, refer to spark.yarn.executor.memoryOverhead and spark.yarn.driver.memoryOverhead in https://spark.apache.org/docs/1.2.0/running-on-yarn.html
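The memory accounting above can be sketched numerically. This is a minimal model, assuming the memoryOverhead default of max(7% of the heap, 384 MB) described in the linked docs; the node and container sizes come from the question:

```python
# Per-node memory accounting for the job in the question.
GB = 1024  # work in MB

NODE_CAPACITY_MB = 12 * GB   # yarn.nodemanager.resource.memory-mb
DRIVER_MB = 3 * GB           # --driver-memory 3g
EXECUTOR_MB = 11 * GB        # --executor-memory 11g

def container_mb(heap_mb):
    """Heap plus YARN memory overhead: max of 7% of heap and 384 MB
    (assumed default of spark.yarn.{executor,driver}.memoryOverhead)."""
    return heap_mb + max(int(heap_mb * 0.07), 384)

driver_container = container_mb(DRIVER_MB)      # 3456 MB (~3.4 GB)
executor_container = container_mb(EXECUTOR_MB)  # 12052 MB (~11.8 GB)

# Each 12 GB node fits exactly one 11.8 GB executor container...
assert executor_container <= NODE_CAPACITY_MB
assert 2 * executor_container > NODE_CAPACITY_MB
# ...and the driver's node has too little left for a third executor.
assert NODE_CAPACITY_MB - driver_container < executor_container
```

So executors 3 through 6 are requested but never scheduled: YARN has no node with ~11.8 GB free, which is why the UI shows spark.executor.instances = 6 while only 2 executors run.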

