Spark workers stopped after driver commanded a shutdown


Problem description


Basically, the master node also acts as one of the slaves. Once the slave running on the master finished, it called SparkContext.stop(), and this shutdown command propagated to all the other slaves, which stopped executing in the middle of their processing.
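A minimal sketch of the pattern that avoids this (the PySpark app name and job below are illustrative, not from the question): call SparkContext.stop() only after every action has returned, for example in a finally block, rather than when one node's share of the work finishes.

```python
# Hypothetical PySpark sketch; names and the job itself are illustrative.
from pyspark import SparkContext

sc = SparkContext(appName="shutdown-example")
try:
    rdd = sc.parallelize(range(100), numSlices=4)
    # sum() is an action: it blocks until every executor's tasks finish.
    total = rdd.map(lambda x: x * 2).sum()
    print(total)
finally:
    # Stopping here guarantees no executor is told to shut down mid-task.
    sc.stop()
```

Because actions are blocking, the stop() in the finally block cannot run while tasks are still in flight on other workers.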


Error log from one of the workers:


INFO SparkHadoopMapRedUtil: attempt_201612061001_0008_m_000005_18112: Committed


INFO Executor: Finished task 5.0 in stage 8.0 (TID 18112). 2536 bytes result sent to driver


INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown


ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM

Recommended answer


Check your resource manager's user interface to see whether any executor failed - it will show details of memory errors. However, if no executor failed but the driver still called for a shutdown, this is usually due to driver memory; try increasing the driver memory. Let me know how it goes.
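As a concrete starting point for the advice above, driver memory can be raised either per job on the submit command line or cluster-wide in spark-defaults.conf. The 4g/2g values and the application file name below are illustrative assumptions, not from the answer - tune them for your cluster.

```shell
# Option 1: per job, on the command line (values are illustrative)
spark-submit \
  --driver-memory 4g \
  --executor-memory 2g \
  your_app.py

# Option 2: cluster-wide default, in conf/spark-defaults.conf:
#   spark.driver.memory   4g
#   spark.executor.memory 2g
```

Note that spark.driver.memory must be set before the driver JVM starts, so for client-mode jobs it has to go on the command line or in the config file, not in the application's SparkConf.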
