Unresponsive Spark Master
Problem Description
I'm trying to run a simple Spark app in standalone mode on a Mac.
I managed to run ./sbin/start-master.sh
to start the master server and a worker.
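For reference, a minimal sketch of those launch steps, assuming a local Spark installation and using the master URL from this question (the worker-start script's name and arguments vary by Spark version):

```shell
# Start the standalone master; its web UI defaults to http://localhost:8080
./sbin/start-master.sh

# Start a worker and point it at the master
# (in newer Spark releases this script is named start-worker.sh)
./sbin/start-slave.sh spark://MacBook-Pro.local:7077
```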
./bin/spark-shell --master spark://MacBook-Pro.local:7077
also works, and I can see it in the running application list in the Master WebUI.
Now I'm trying to write a simple Spark app:
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.SparkContext._

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Simple Application")
      .setMaster("spark://MacBook-Pro.local:7077")
    val sc = new SparkContext(conf)

    val logFile = "README.md"
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
Running this simple app gives me an error message saying the Master is unresponsive:
15/02/15 09:47:47 INFO AppClient$ClientActor: Connecting to master spark://MacBook-Pro.local:7077...
15/02/15 09:47:48 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster@MacBook-Pro.local:7077] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/02/15 09:48:07 INFO AppClient$ClientActor: Connecting to master spark://MacBook-Pro.local:7077...
15/02/15 09:48:07 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster@MacBook-Pro.local:7077] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/02/15 09:48:27 INFO AppClient$ClientActor: Connecting to master spark://MacBook-Pro.local:7077...
15/02/15 09:48:27 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster@MacBook-Pro.local:7077] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/02/15 09:48:47 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/02/15 09:48:47 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.
15/02/15 09:48:47 ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
Any idea what the problem is? Thanks.
Recommended Answer
You can either set the master when calling spark-submit, or (as you've done here) set it explicitly via the SparkConf. Try following the example in the Spark Configuration docs, and set the master as follows:
val conf = new SparkConf().setMaster("local[2]")
From the same page (explaining the number in brackets that follows local): "Note that we run with local[2], meaning two threads - which represents "minimal" parallelism, which can help detect bugs that only exist when we run in a distributed context."
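As a sketch of the first option above (choosing the master on the command line instead of hard-coding it in SparkConf), assuming the app has been packaged into a jar, e.g. with sbt package; the jar path below is hypothetical:

```shell
# Submit the packaged app, passing the master at launch time
# (jar path is a placeholder for wherever your build puts it)
./bin/spark-submit \
  --class SimpleApp \
  --master spark://MacBook-Pro.local:7077 \
  target/scala-2.10/simple-app_2.10-1.0.jar
```

With --master set here, the setMaster call can be dropped from the code, which keeps the app portable between local and cluster runs.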