Spark: executor.CoarseGrainedExecutorBackend: Driver Disassociated


Question

I am learning how to use Spark, and I have a simple program. When I run the jar file it gives me the right result, but there are some errors in the stderr file, like this:

 15/05/18 18:19:52 ERROR executor.CoarseGrainedExecutorBackend: Driver   Disassociated [akka.tcp://sparkExecutor@localhost:51976] -> [akka.tcp://sparkDriver@172.31.34.148:60060] disassociated! Shutting down.
 15/05/18 18:19:52 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@172.31.34.148:60060] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].

You can get the whole stderr file here:

http://172.31.34.148:8081/logPage/?appId=app-20150518181945-0026&executorId=0&logType=stderr

I searched for this problem and found this:

Why does a Spark application fail with "executor.CoarseGrainedExecutorBackend: Driver Disassociated"?

I turned up spark.yarn.executor.memoryOverhead as it suggested, but it didn't work.
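One detail worth noting: spark.yarn.* settings only take effect when the application actually runs on YARN (--master yarn / yarn-cluster); on a standalone master (spark://...) they are ignored, which may be why raising the overhead had no visible effect here. A hedged sketch of how that setting would be passed on YARN (class name, jar path, and the 1024 MB value are placeholders, not from the original post):

```shell
# Hypothetical example: passing the executor memory overhead via --conf.
# This setting is read only when running on YARN, not on a standalone
# spark://... master.
./bin/spark-submit --class MyApp \
  --master yarn-cluster \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  /path/to/app.jar
```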

I have just one master node (8G memory), and in Spark's slaves file there is only one slave node: the master itself. I submit like this:

./bin/spark-submit --class .... --master spark://master:7077 --executor-memory 6G --total-executor-cores 8 /path/..jar hdfs://myfile

I don't know what the executor and the driver are... lol... sorry about that.

Can anyone help me?

Answer

If the Spark Driver fails, it gets disassociated (from the YARN AM). Try the following to make it more fault-tolerant:

  • spark-submit with --supervise flag on Spark Standalone cluster
  • yarn-cluster mode on YARN
  • spark.yarn.driver.memoryOverhead parameter for increasing Driver's memory allocation on YARN
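The options above might look like the following spark-submit invocations (a minimal sketch; the class name, jar path, and overhead value are placeholders, not taken from the original answer):

```shell
# Option 1: Spark Standalone, cluster deploy mode with --supervise,
# so the worker restarts the driver process if it exits with a
# non-zero code:
./bin/spark-submit --class MyApp \
  --master spark://master:7077 \
  --deploy-mode cluster --supervise \
  /path/to/app.jar

# Option 2: yarn-cluster mode on YARN, with extra off-heap memory
# reserved for the driver container via spark.yarn.driver.memoryOverhead
# (value in MB):
./bin/spark-submit --class MyApp \
  --master yarn-cluster \
  --conf spark.yarn.driver.memoryOverhead=512 \
  /path/to/app.jar
```

Note that --supervise only applies in cluster deploy mode on a standalone master; in the default client mode the driver runs inside spark-submit itself and cannot be restarted by the cluster.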

