TaskSchedulerImpl: Initial job has not accepted any resources;

Problem description

下面就是我要做的。

我创建了我创建了一个Java程序来获得一个表(卡珊德拉数据库表)。

的计数DataStax企业集群的两个节点,在上面

该计划是建立在Eclipse这实际上是从Windows中。

目前,从它在运行时,出现以下错误失败的窗口运行此程序的时间:


  

最初的工作没有接受任何资源;检查你的集群用户界面,以确保工人登记,并有足够的内存


同样的code已编制和放大器;对成功没有任何问题的集群上运行。可能是什么,为什么我得到上述错误的原因?

code:

 进口org.apache.spark.SparkConf;进口org.apache.spark.SparkContext;进口org.apache.spark.api.java.JavaSparkContext;
进口org.apache.spark.sql.SchemaRDD;
进口org.apache.spark.sql.cassandra.CassandraSQLContext;
进口com.datastax.bdp.spark.DseSparkConfHelper;公共类SparkProject {    公共静态无效的主要(字串[] args){        SparkConf CONF = DseSparkConfHelper.enrichSparkConf(新SparkConf()).setMaster(\"spark://10.63.24.14X:7077\").setAppName(\"DatastaxTests\").set(\"spark.cassandra.connection.host\",\"10.63.24.14x\").set(\"spark.executor.memory\", 2048米)设置(spark.driver.memory,1024米)集(spark.local.ip,10.63.24.14X)。;        JavaSparkContext SC =新JavaSparkContext(CONF);        CassandraSQLContext cassandraContext =新CassandraSQLContext(sc.sc());
        SchemaRDD员工= cassandraContext.sql(SELECT * FROM portware_ants.orders);        //employees.registerTempTable(\"employees);
        // SchemaRDD经理= cassandraContext.sql(SELECT符号FROM雇员);
        的System.out.println(employees.count());        sc.stop();
    }
}


解决方案

我的问题比我的奴隶都可以,我是分配太多内存。尝试减少火花的内存大小提交。类似如下:

 〜/火花1.5.0 /斌/火花提交--master火花://我-PC:7077 --total执行人-芯2 --executor内存512米

我的〜/火花1.5.0 / conf目录/ spark-env.sh 之中:

  SPARK_WORKER_INSTANCES = 4
SPARK_WORKER_MEMORY =千米
SPARK_WORKER_CORES = 2

Here is what I am trying to do.

I have created a two-node DataStax Enterprise cluster, on top of which I have written a Java program to get the count of one table (a Cassandra database table).

The program was built in Eclipse on a Windows box.

When I run the program from Windows, it fails at runtime with the following error:

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

The same code has been compiled and run successfully on the cluster itself without any issue. What could be the reason I am getting the above error?

Code:

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SchemaRDD;
import org.apache.spark.sql.cassandra.CassandraSQLContext;
import com.datastax.bdp.spark.DseSparkConfHelper;

public class SparkProject {

    public static void main(String[] args) {

        // Point the application at the DSE Spark master and the Cassandra host.
        SparkConf conf = DseSparkConfHelper.enrichSparkConf(new SparkConf())
                .setMaster("spark://10.63.24.14X:7077")
                .setAppName("DatastaxTests")
                .set("spark.cassandra.connection.host", "10.63.24.14x")
                .set("spark.executor.memory", "2048m")
                .set("spark.driver.memory", "1024m")
                .set("spark.local.ip", "10.63.24.14X");

        JavaSparkContext sc = new JavaSparkContext(conf);

        CassandraSQLContext cassandraContext = new CassandraSQLContext(sc.sc());
        SchemaRDD employees = cassandraContext.sql("SELECT * FROM portware_ants.orders");

        //employees.registerTempTable("employees");
        //SchemaRDD managers = cassandraContext.sql("SELECT symbol FROM employees");
        System.out.println(employees.count());

        sc.stop();
    }
}

Solution

My problem was that I was requesting more memory than my slaves had available. Try reducing the memory size requested in spark-submit, something like the following (a programmatic version of the same fix is sketched after the config below):

~/spark-1.5.0/bin/spark-submit --master spark://my-pc:7077 --total-executor-cores 2 --executor-memory 512m

with my ~/spark-1.5.0/conf/spark-env.sh being:

SPARK_WORKER_INSTANCES=4
SPARK_WORKER_MEMORY=1000m
SPARK_WORKER_CORES=2
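
The arithmetic is what matters here: with SPARK_WORKER_MEMORY=1000m, no single worker can ever satisfy a 2048m executor request like the one in the question's code, so the scheduler leaves the job pending and logs the "Initial job has not accepted any resources" warning, whereas a 512m request fits on every worker. Below is a minimal sketch of the same fix applied programmatically to the question's SparkConf; the master URL is the placeholder from the question, and the values should be adjusted to whatever your own spark-env.sh allows:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkProjectFixed {

    public static void main(String[] args) {

        // Keep the per-executor request below what a single worker can offer
        // (here 512m against workers configured with 1000m each).
        SparkConf conf = new SparkConf()
                .setMaster("spark://10.63.24.14X:7077") // placeholder master URL from the question
                .setAppName("DatastaxTests")
                .set("spark.executor.memory", "512m")   // was 2048m, more than any single worker has
                .set("spark.driver.memory", "1024m")
                .set("spark.cores.max", "2");           // equivalent of --total-executor-cores 2

        JavaSparkContext sc = new JavaSparkContext(conf);

        // ... run the job as before ...

        sc.stop();
    }
}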
