Spark - How to run a standalone cluster locally
Question
Is there the possibility to run the Spark standalone cluster locally on just one machine (which is basically different from just developing jobs locally, i.e. with local[*])?
So far I am running 2 different VMs to build a cluster; what if I could run a standalone cluster on the very same machine, with for instance three different JVMs running?
Could something like having multiple loopback addresses do the trick?
Accepted answer
Yes, you can do it: launch one master and one worker node and you are good to go.
Launch the master:
./sbin/start-master.sh
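If you want the spark:// URL to be predictable rather than derived from your hostname, the launch script accepts options to pin the host and ports. A minimal sketch (the port values are just the defaults made explicit):

```shell
# Pin the master to a known host and port so workers and spark-submit
# can always reach it at spark://localhost:7077; the web UI stays on 8080.
./sbin/start-master.sh --host localhost --port 7077 --webui-port 8080
```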
Launch a worker:
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://localhost:7077 -c 1 -m 512M
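To get the three-JVM setup the question asks about (one master plus two workers) on a single machine, you can start additional workers against the same master; multiple loopback addresses are not needed as long as each worker gets its own web UI port and work directory. A sketch under those assumptions (the port numbers and directories are arbitrary choices, not requirements):

```shell
# Start two workers against the same local master. Each worker JVM needs
# a distinct web UI port and work directory to avoid clashes on one host.
for i in 1 2; do
  ./bin/spark-class org.apache.spark.deploy.worker.Worker \
    --webui-port $((8080 + i)) \
    --work-dir /tmp/spark-worker-$i \
    -c 1 -m 512M \
    spark://localhost:7077 &
done
```

Both workers should then appear as ALIVE on the master's web UI.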
Run the SparkPi example:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://localhost:7077 lib/spark-examples-1.2.1-hadoop2.4.0.jar
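Beyond one-off jobs, you can also attach an interactive shell to the same local cluster to confirm it is being used (spark-shell ships with the standard distribution):

```shell
# Attach an interactive Scala shell to the local standalone master;
# inside the shell, sc.master should report spark://localhost:7077.
./bin/spark-shell --master spark://localhost:7077
```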