Running an Akka Cluster with EC2 and Docker. Nodes aren't registered in akka-cluster


Problem Description

I implemented the following sample of an akka-cluster system; please see the diagram below:

                                 ┌────host_D:3000────┐
                        ┌───────▶│    ....           │  
                        │      ┌────host_C:3000────┐ │  
 ┌────host_A:2551────┐  │ ┌───▶│                   │ │
 │                   │──┘ │  ┌────host_B:3000────┐ │ │  
 │┌─────────────────┐│────┘  │┌─────────────────┐│ │ │  
 ││   MasterActor   ││──────▶││   WorkerActor   ││ │─┘
 │└─────────────────┘│       │└─────────────────┘│─┘
 └───────────────────┘       └───────────────────┘

The MasterActor and WorkerActor are implemented in separate sbt modules and started using scalatra servlets. So an actor system is created in a ServletContextListener when a particular sbt module is deployed.
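
A minimal sketch of that bootstrap, assuming the standard javax.servlet API and a recent Akka 2.x (the listener class name and the `"master"` actor name here are hypothetical, not from the original project):

```scala
import javax.servlet.{ServletContextEvent, ServletContextListener}
import akka.actor.{ActorSystem, Props}

// Hypothetical listener: creates the actor system when the sbt module's
// WAR is deployed, and shuts it down when the webapp is undeployed.
class ClusterBootstrapListener extends ServletContextListener {
  private var system: Option[ActorSystem] = None

  override def contextInitialized(e: ServletContextEvent): Unit = {
    // Reads akka.* settings (seed-nodes, hostname, port) from application.conf
    val sys = ActorSystem("ClusterSystem")
    sys.actorOf(Props[MasterActor], "master") // WorkerActor in the worker module
    system = Some(sys)
  }

  override def contextDestroyed(e: ServletContextEvent): Unit =
    system.foreach(_.terminate())
}
```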

The MasterActor and WorkerActor are subscribed to cluster events (such as MemberJoined/MemberUp/etc.). The WorkerActor can be scaled across different nodes, and the following port restrictions are used:

  • 2551 - for the MasterActor's cluster node
  • 3000 - for the WorkerActor's cluster nodes
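
The subscription itself can be sketched against the Akka Cluster API like this (a minimal sketch; the log messages are illustrative, not from the original code):

```scala
import akka.actor.{Actor, ActorLogging}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class WorkerActor extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  // InitialStateAsEvents replays the current cluster state
  // as MemberUp/MemberJoined/... events on subscription.
  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[UnreachableMember])

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case MemberUp(member) =>
      log.info("Member is Up: {}", member.address)
    case UnreachableMember(member) =>
      log.info("Member detected as unreachable: {}", member)
    case _: MemberEvent => // other membership changes, ignored here
  }
}
```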

I need to focus on cluster events only, because the following details are omitted from this topic:

  • detecting seed nodes (they are calculated by an EC2 client)
  • sending messages from MasterActor to the workers (they are sent by some load balancer).

This works successfully on my local machine (and with virtual machines under VirtualBox), but I've run into issues when deploying on EC2/Docker. For example, I use two EC2 hosts with the following IPs: 10.x.x.A and 10.x.x.B. The project can be deployed in EC2 in the following ways:

  1. MasterActor module at 10.x.x.A and WorkerActor module at 10.x.x.B
  2. vice versa
  3. both modules deployed on the same host.

I consider way #1, where the modules are deployed on different hosts. Since I don't know which IP will be used for the MasterActor, I reserve a seed node for each host and port combination, according to the port restrictions above. Please see the diagram below, which illustrates my infrastructure and akka-cluster configuration.

┌──[ec2@10.x.x.A]─────────────────────────────────────────────┐
│                                                             │
│  > ifconfig                                                 │ 
│    eth0 10.x.x.A                                            │
│    docker0 172.17.0.1                                       │ 
│                                                             │ 
│                                                             │ 
│  ┌─────[docker:172.17.x.d1]──────────────────────────────┐  │
│  │ > ifconfig                       ┌─────────────────┐  │  │
│  │   eth0    172.17.x.d1            │   MasterActor   │  │  │
│  │                                  └─────────────────┘  │  │
│  │ ClusterSystem {                                       │  │
│  │   akka.remote.netty.tcp.hostname      = "10.x.x.A"    │  │
│  │   akka.remote.netty.tcp.port          = "2551"        │  │
│  │   akka.cluster.roles                  = ["master"]    │  │
│  │   akka.remote.netty.tcp.bind-hostname = "172.17.x.d1" │  │
│  │   akka.remote.netty.tcp.bind-port     = "2552"        │  │
│  │   akka.cluster.seed-nodes = [                         │  │
│  │     "akka.tcp://ClusterSystem@10.x.x.A:2551",         │  │
│  │     "akka.tcp://ClusterSystem@10.x.x.A:3000",         │  │
│  │     "akka.tcp://ClusterSystem@10.x.x.B:2551",         │  │
│  │     "akka.tcp://ClusterSystem@10.x.x.B:3000" ]        │  │
│  │ }                                                     │  │
│  │                                                       │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘   


┌──[ec2@10.x.x.B]─────────────────────────────────────────────┐
│                                                             │
│  > ifconfig                                                 │ 
│    eth0 10.x.x.B                                            │
│    docker0 172.17.0.1                                       │ 
│                                                             │ 
│                                                             │ 
│  ┌─────[docker:172.17.x.d2]──────────────────────────────┐  │
│  │ > ifconfig                       ┌─────────────────┐  │  │
│  │   eth0    172.17.x.d2            │   WorkerActor   │  │  │
│  │                                  └─────────────────┘  │  │
│  │ ClusterSystem {                                       │  │
│  │   akka.remote.netty.tcp.hostname      = "10.x.x.B"    │  │
│  │   akka.remote.netty.tcp.port          = "3000"        │  │
│  │   akka.cluster.roles                  = ["worker"]    │  │
│  │   akka.remote.netty.tcp.bind-hostname = "172.17.x.d2" │  │
│  │   akka.remote.netty.tcp.bind-port     = "2552"        │  │
│  │   akka.cluster.seed-nodes = [                         │  │
│  │     "akka.tcp://ClusterSystem@10.x.x.A:2551",         │  │
│  │     "akka.tcp://ClusterSystem@10.x.x.A:3000",         │  │
│  │     "akka.tcp://ClusterSystem@10.x.x.B:2551",         │  │
│  │     "akka.tcp://ClusterSystem@10.x.x.B:3000" ]        │  │
│  │ }                                                     │  │
│  │                                                       │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘   
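
For this hostname/bind-hostname split to work, the port that other nodes dial on the host (2551 or 3000) must actually be published onto the port the actor system binds to inside the container (2552). A hedged docker run sketch for the worker host above (the image and container names are hypothetical):

```shell
# On 10.x.x.B: map host port 3000 (the address in seed-nodes)
# to container port 2552 (akka.remote.netty.tcp.bind-port).
docker run -d \
  -p 3000:2552 \
  --name worker \
  my-worker-image

# The master host is analogous: -p 2551:2552 on 10.x.x.A.
```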

In each EC2 instance I show the result of the ifconfig command; the same is shown inside each docker container. For the akka-cluster configuration I used this manual:

The main issue: the MasterActor node starts and registers itself in the akka-cluster successfully, but the WorkerActor starts and is not registered in the akka-cluster.

The main questions: is this a correct configuration for my cluster system? Are there any mistakes?

Also, I've found an issue which may be connected with the main one:

  1. Can't ping from 10.x.x.A to 10.x.x.B and vice versa

Answer

The issue was connected with host and port availability:

Can't ping from 10.x.x.A to 10.x.x.B and vice versa

Now the cluster runs successfully.
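
Assuming the blocker was host/port reachability (e.g. EC2 security group rules), connectivity between the nodes can be verified from each host before starting the containers; the IPs here are the placeholders from the question:

```shell
# From 10.x.x.A: check that the worker's cluster port is reachable.
nc -zv 10.x.x.B 3000

# From 10.x.x.B: check that the master's cluster port is reachable.
nc -zv 10.x.x.A 2551
```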
