Unable to run Spark master in DSE 4.5 and slaves file is missing
Problem description
I have a 5-node DSE 4.5 cluster up and running. Out of the 5 nodes, 1 node is hadoop_enabled and spark_enabled, but the Spark master is not running.
ERROR [Thread-709] 2014-07-02 11:35:48,519 ExternalLogger.java (line 73) SparkMaster: Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to bind to: /54.xxx.xxx.xxx:7077
Does anyone have any idea about this? I have also tried to export SPARK_LOCAL_IP, but that does not work either.
The DSE documentation wrongly states that the spark-env.sh configuration file is at resources/spark/conf/spark-env.sh; the actual path of the configuration dir is /etc/dse/spark.
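As a sanity check (a minimal sketch, assuming a package install with the paths above; on tarball installs the file typically lives under resources/spark/conf relative to the install directory, which may be what the documentation is describing), you can confirm which spark-env.sh the node actually has:

# Package installs keep the Spark config here
ls -l /etc/dse/spark/spark-env.sh

# Spark's launch scripts honour SPARK_CONF_DIR if it is set in the environment
echo "$SPARK_CONF_DIR"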
The slaves file is also missing from the conf dir, and the run files are also missing from the bin dir. I'm getting the error below:

$ dse spark
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 0.9.1
/_/
Using Scala version 2.10.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_51)
Type in expressions to have them evaluated.
Type :help for more information.
Creating SparkContext...
14/07/03 11:37:41 ERROR Remoting: Remoting error: [Startup failed] [
akka.remote.RemoteTransportException: Startup failed
at akka.remote.Remoting.akka$remote$Remoting$$notifyError(Remoting.scala:129)
at akka.remote.Remoting.start(Remoting.scala:194)
at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:579)
at akka.actor.ActorSystemImpl._start(ActorSystem.scala:577)
at akka.actor.ActorSystemImpl.start(ActorSystem.scala:588)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:96)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:126)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:139)
at shark.SharkContext.<init>(SharkContext.scala:42)
at shark.SharkEnv$.initWithSharkContext(SharkEnv.scala:90)
at com.datastax.bdp.spark.SparkILoop.createSparkContext(SparkILoop.scala:41)
at $line3.$read$$iwC$$iwC.<init>(<console>:10)
at $line3.$read$$iwC.<init>(<console>:32)
at $line3.$read.<init>(<console>:34)
at $line3.$read$.<init>(<console>:38)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:772)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1040)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:609)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:640)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:604)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:793)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:838)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:750)
at com.datastax.bdp.spark.SparkILoop$$anonfun$initializeSparkContext$1.apply(SparkILoop.scala:66)
at com.datastax.bdp.spark.SparkILoop$$anonfun$initializeSparkContext$1.apply(SparkILoop.scala:66)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:258)
at com.datastax.bdp.spark.SparkILoop.initializeSparkContext(SparkILoop.scala:65)
at com.datastax.bdp.spark.SparkILoop.initializeSpark(SparkILoop.scala:47)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:908)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:140)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:53)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:102)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:53)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:925)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:881)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:881)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:881)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:973)
at com.datastax.bdp.spark.SparkReplMain$.main(SparkReplMain.scala:22)
at com.datastax.bdp.spark.SparkReplMain.main(SparkReplMain.scala)
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /54.xx.xx.xx:0
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:391)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:388)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
]
org.jboss.netty.channel.ChannelException: Failed to bind to: /54.xxx.xxx.xxx.xxx:0
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:391)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:388)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
My spark-env.sh:
export SPARK_HOME="/usr/share/dse/spark"
export SPARK_MASTER_IP=54.xx.xx.xx   # public IP
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=7080
export SPARK_WORKER_WEBUI_PORT=7081
export SPARK_WORKER_MEMORY="4g"
export SPARK_MEM="2g"
export SPARK_REPL_MEM="2g"
export SPARK_CONF_DIR="/etc/dse/spark"
export SPARK_TMP_DIR="$SPARK_HOME/tmp"
export SPARK_LOG_DIR="$SPARK_HOME/logs"
export SPARK_LOCAL_IP=54.xx.xx.xx   # public IP
export SPARK_COMMON_OPTS="$SPARK_COMMON_OPTS -Dspark.kryoserializer.buffer.mb=10 "
export SPARK_MASTER_OPTS=" -Dspark.deploy.defaultCores=1 -Dspark.local.dir=$SPARK_TMP_DIR/master -Dlog4j.configuration=file://$SPARK_CONF_DIR/log4j-server.properties -Dspark.log.file=$SPARK_LOG_DIR/master.log "
export SPARK_WORKER_OPTS=" -Dspark.local.dir=$SPARK_TMP_DIR/worker -Dlog4j.configuration=file://$SPARK_CONF_DIR/log4j-server.properties -Dspark.log.file=$SPARK_LOG_DIR/worker.log "
export SPARK_EXECUTOR_OPTS=" -Djava.io.tmpdir=$SPARK_TMP_DIR/executor -Dlog4j.configuration=file://$SPARK_CONF_DIR/log4j-executor.properties "
export SPARK_REPL_OPTS=" -Djava.io.tmpdir=$SPARK_TMP_DIR/repl/$USER "
export SPARK_APP_OPTS=" -Djava.io.tmpdir=$SPARK_TMP_DIR/app/$USER "
# Directory to run applications in, which will include both logs and scratch space (default: SPARK_HOME/work).
export SPARK_WORKER_DIR="$SPARK_HOME/work"
Answer

Look at the Akka remote host:port configured for your SparkConf, and any related Akka configuration in a reference.conf file. This looks like an Akka startup conflict between Akka Remoting and a host:port it expects to use but that is already taken, i.e. the ChannelException. Something else is already using 54.xx.xx.xx:0 when Spark's Akka ActorSystem starts up.
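As an illustration of that check (a minimal sketch rather than a confirmed fix; it assumes a Linux host and that 54.xx.xx.xx is a cloud-style public IP that may be NATed rather than assigned to a local interface), you can compare the address Spark is told to bind to against what the node can actually bind:

# Addresses actually assigned to local interfaces; a bind to an address that is
# not in this list fails with "Cannot assign requested address"
ip addr show | grep "inet "

# Anything already listening on the Spark master port would explain a port conflict
netstat -tlnp | grep 7077

# If the public IP is not assigned locally, one thing to try in /etc/dse/spark/spark-env.sh
# is pointing Spark at an address that is (e.g. the node's private/listen address):
# export SPARK_MASTER_IP=<private IP or hostname>
# export SPARK_LOCAL_IP=<private IP or hostname>

After changing spark-env.sh, the Spark-enabled node needs to be restarted for the master to pick up the new bind address.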