How to make executors run a Spark program by using --num-executors?


Problem Description

I have four nodes to run my Spark program and I set --num-executors 4, but the problem is that only two are running; the other two machines do not do any computation. Here is the executor summary:

Executor_ID  Address  ......  Total_Tasks  Task_Time  Input
1            slave8           88           21.5s      104MB
2            slave6           0            0          0B
3            slave1           88           1min       99.4MB
4            slave2           0            0          0B

How can I make all four of these nodes run my Spark program?
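For context, --num-executors is a flag passed to spark-submit when running on YARN. A minimal sketch of such a submission follows; the application JAR name and the executor core/memory sizes are hypothetical placeholders, only --num-executors 4 comes from the question:

# Minimal sketch of a YARN submission; my-app.jar and the executor
# core/memory sizes below are hypothetical placeholders.
# --num-executors 4 asks YARN for four executors, ideally one per node.
spark-submit \
  --master yarn \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2g \
  my-app.jar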


Solution

I'm guessing that you run on YARN. In that case, you need to set

yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator

in the capacity-scheduler.xml file. See Apache Hadoop Yarn - Underutilization of cores (http://stackoverflow.com/questions/29964792/apache-hadoop-yarn-underutilization-of-cores). Otherwise YARN will only launch 2 executors no matter what you specify with the --num-executors flag.
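Written as a property element, the entry looks like the sketch below; this shows only the relevant property, and a real capacity-scheduler.xml will contain other entries as well:

<!-- In capacity-scheduler.xml: use the DominantResourceCalculator so that
     YARN accounts for both memory and vcores when placing containers,
     instead of the default memory-only DefaultResourceCalculator. -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>

After editing the file, the change usually takes effect only after refreshing the scheduler (e.g. with yarn rmadmin -refreshQueues) or restarting the ResourceManager.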

