Why does spark.executor.instances not work?
Question
I'm using 40 r4.2xlarge slaves and one master of the same instance type. An r4.2xlarge has 8 cores and 61GB of memory.
The settings are:
- spark.executor.instances 280
- spark.executor.cores 1
- spark.executor.memory 8G
- spark.driver.memory 40G
- spark.yarn.executor.memoryOverhead 10240
- spark.dynamicAllocation.enabled false
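For reference, a minimal sketch of how these settings might be supplied to a PySpark application (the app name and builder-style configuration are illustrative; in practice settings such as spark.executor.memory and spark.driver.memory are usually passed at submit time, e.g. as --conf flags to spark-submit, since they must be known before the containers launch):

```python
from pyspark.sql import SparkSession

# Illustrative only: the same settings listed above, applied via the session builder.
spark = (
    SparkSession.builder
    .appName("executor-instances-demo")                     # hypothetical app name
    .config("spark.executor.instances", "280")
    .config("spark.executor.cores", "1")
    .config("spark.executor.memory", "8G")
    .config("spark.driver.memory", "40G")
    .config("spark.yarn.executor.memoryOverhead", "10240")  # in MB
    .config("spark.dynamicAllocation.enabled", "false")
    .getOrCreate()
)
```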
When observing a job running on this cluster in Ganglia, overall CPU usage is only around 30%, and the ResourceManager's "Aggregated Metrics by Executor" table shows only two executors per slave node.
Why does this cluster run only two executors per slave node, even with spark.executor.instances set to 280?
---- UPDATE ----
I found the yarn-site.xml under /etc/hadoop/conf.empty:
- yarn.scheduler.maximum-allocation-mb 54272
- yarn.scheduler.maximum-allocation-vcores 128
- yarn.nodemanager.resource.cpu-vcores 8
- yarn.nodemanager.resource.memory-mb 54272
Answer
If you are running the job on YARN, the number of executors is not determined by spark.executor.instances alone; it also depends on several YARN configuration parameters, in particular:
yarn.scheduler.maximum-allocation-mb
yarn.scheduler.maximum-allocation-vcores
yarn.nodemanager.resource.cpu-vcores
yarn.nodemanager.resource.memory-mb
Please check these parameters in yarn-site.xml.
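A rough back-of-the-envelope check with the numbers quoted above shows why YARN schedules only two executors per node. This sketch assumes each executor container is sized at spark.executor.memory plus spark.yarn.executor.memoryOverhead, ignoring any rounding to yarn.scheduler.minimum-allocation-mb:

```python
# Per-executor container size requested from YARN (assumption: memory + overhead, no rounding).
executor_memory_mb = 8 * 1024            # spark.executor.memory 8G
memory_overhead_mb = 10240               # spark.yarn.executor.memoryOverhead
container_mb = executor_memory_mb + memory_overhead_mb    # 18432 MB

# Memory a single NodeManager can allocate (yarn.nodemanager.resource.memory-mb).
node_memory_mb = 54272

executors_per_node = node_memory_mb // container_mb
print(executors_per_node)                # 2 -> 54272 / 18432 ≈ 2.9, so only 2 containers fit

# Across 40 slave nodes that caps the job at roughly 80 executors,
# far below the 280 requested through spark.executor.instances.
print(40 * executors_per_node)           # 80
```

Under this assumption, shrinking the per-executor memory footprint (for example, lowering spark.yarn.executor.memoryOverhead) would be one way to fit more executors per node and get closer to the requested 280.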