Connection Refused When Running SparkPi Locally


Problem Description

I'm trying to run a simple execution of the SparkPi example. I started the master and one worker, then executed the job on my local "cluster", but ended up getting a sequence of errors, all ending with

Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: /127.0.0.1:39398

I originally tried running my master and worker without any configuration, but ended up with the same error. I then tried changing everything to 127.0.0.1 to test whether it was just a firewall issue, since the server is locked down from the outside world.
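One way to rule out a binding problem (a sketch assuming a Linux host where the ss utility is available; netstat -tlnp is an older equivalent) is to check which local address the master actually bound port 7077 to:

# List listening TCP sockets and filter for the Spark master port.
# 127.0.0.1:7077 means only loopback clients can connect;
# 0.0.0.0:7077 or a LAN address means other interfaces can reach it.
ss -tlnp | grep 7077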

My conf/spark-env.sh contains the following:

export SPARK_MASTER_IP=127.0.0.1

Here is the order and commands I run:

1) sbin/start-master.sh (to start the master)

2) bin/spark-class org.apache.spark.deploy.worker.Worker spark://127.0.0.1:7077 --ip 127.0.0.1 --port 1111 (in a different session on the same machine to start the slave)

3) bin/run-example org.apache.spark.examples.SparkPi spark://127.0.0.1:7077 (in a different session on the same machine to start the job)

I find it hard to believe that I'm locked down enough that running locally would cause problems.

Answer

It looks like you should not set SPARK_MASTER_IP to the loopback address 127.0.0.1; the worker node will not be able to connect to the master node over a loopback address.

You should set it to a valid local IP address (e.g., 192.168.0.2) in conf/spark-env.sh, and add the worker's IP to the conf/slaves configuration file on both the master and the worker nodes.
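For example, a minimal sketch of the two files (assuming the machine's real LAN address is 192.168.0.2, as in the example above):

# conf/spark-env.sh -- bind the master to a routable address, not loopback
export SPARK_MASTER_IP=192.168.0.2

# conf/slaves -- one worker hostname or IP per line
192.168.0.2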

Then you can use sbin/start-all.sh to start the cluster.
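To confirm the daemons actually came up, one quick check (assuming a JDK is on the PATH) is jps, which lists running JVM processes; the standalone daemons show up by their main-class names, and the master's web UI listens on port 8080 by default:

# The PIDs below are placeholders; what matters is seeing both names.
jps
# 12345 Master
# 12346 Worker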

And then run "bin/run-example org.apache.spark.examples.SparkPi"
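If your build of run-example takes the master URL as an argument, as in step 3 of the question, the submission would point at the new address instead (192.168.0.2 again stands in for your real IP):

bin/run-example org.apache.spark.examples.SparkPi spark://192.168.0.2:7077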

