Running a Job on Spark 0.9.0 throws error


Problem Description

I have an Apache Spark 0.9.0 cluster installed, on which I am trying to deploy code that reads a file from HDFS. This piece of code throws a warning and eventually the job fails. Here is the code:

/**
 * running the code would fail 
 * with a warning 
 * Initial job has not accepted any resources; check your cluster UI to ensure that 
 * workers are registered and have sufficient memory
 */

import org.apache.spark.{SparkConf, SparkContext}

object Main extends App {
  val sconf = new SparkConf()
    .setMaster("spark://labscs1:7077")
    .setAppName("spark scala")
  val sctx = new SparkContext(sconf)
  sctx.parallelize(1 to 100).count
}

Here is the warning message:


Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory


How do I get rid of this, or am I missing some configuration?

Recommended Answer


You get this when either the number of cores or the amount of RAM (per node) you request by setting spark.cores.max and spark.executor.memory, respectively, exceeds what is available. Therefore, even if no one else is using the cluster, if you specify that you want, say, 100GB of RAM per node but your nodes can only support 90GB, you will get this error message.
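As a minimal sketch, the same driver program could cap its resource request explicitly on the SparkConf; the 4-core and 2g figures below are placeholders, and the right numbers are whatever your cluster UI reports as available per worker:

import org.apache.spark.{SparkConf, SparkContext}

/**
 * Sketch only: the core and memory figures below are assumed values.
 * Set them at or below what the cluster UI reports for each worker.
 */
object MainWithLimits extends App {
  val sconf = new SparkConf()
    .setMaster("spark://labscs1:7077")
    .setAppName("spark scala")
    .set("spark.cores.max", "4")        // total cores the job may claim
    .set("spark.executor.memory", "2g") // RAM requested per executor
  val sctx = new SparkContext(sconf)
  println(sctx.parallelize(1 to 100).count)
  sctx.stop()
}

If the requested cores and executor memory fit within what the registered workers offer, the job should be scheduled instead of hanging on the "Initial job has not accepted any resources" warning.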


To be fair, the message is vague in this situation; it would be more helpful if it said you are exceeding the maximum.

