How to create a Docker overlay network between multiple hosts?


Question

I have been trying to create an overlay network between two hosts with no success. I keep getting the error message:

mavungu@mavungu-Aspire-5250:~$ sudo docker -H tcp://192.168.0.18:2380 network create -d overlay myapp
Error response from daemon: 500 Internal Server Error: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault (most likely the backing datastore is not configured)

mavungu@mavungu-Aspire-5250:~$ sudo docker network create -d overlay myapp
[sudo] password for mavungu:
Error response from daemon: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault (most likely the backing datastore is not configured)

My environment details:

mavungu@mavungu-Aspire-5250:~$ sudo docker info
Containers: 1
Images: 364
Server Version: 1.9.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 368
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-26-generic
Operating System: Ubuntu 15.04
CPUs: 2
Total Memory: 3.593 GiB
Name: mavungu-Aspire-5250
Registry: https://index.docker.io/v1/
WARNING: No swap limit support

I have a swarm cluster working well with consul as the discovery mechanism:

mavungu@mavungu-Aspire-5250:~$ sudo docker -H tcp://192.168.0.18:2380 info 

Containers: 4 
Images: 51 
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
mavungu-Aspire-5250: 192.168.0.36:2375
└ Containers: 1
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 3.773 GiB
└ Labels: executiondriver=native-0.2, kernelversion=3.19.0-26-generic, operatingsystem=Ubuntu 15.04, storagedriver=aufs
mavungu-HP-Pavilion-15-Notebook-PC: 192.168.0.18:2375
└ Containers: 3
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 3.942 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.2.0-19-generic, operatingsystem=Ubuntu 15.10, storagedriver=aufs
CPUs: 6
Total Memory: 7.715 GiB
Name: bb47f4e57436

My consul is available at 192.168.0.18:8500 and it works well with the swarm cluster.

I would like to be able to create an overlay network across the two hosts. I have configured the Docker engines on both hosts with these additional settings:

DOCKER_OPTS="-D --cluster-store-consul://192.168.0.18:8500 --cluster-advertise=192.168.0.18:0"

DOCKER_OPTS="-D --cluster-store-consul://192.168.0.18:8500 --cluster-advertise=192.168.0.36:0"

I had to stop and restart the engines and reset the swarm cluster... After failing to create the overlay network, I changed the --cluster-advertise setting to this:

DOCKER_OPTS="-D --cluster-store-consul://192.168.0.18:8500 --cluster-advertise=192.168.0.18:2375"

DOCKER_OPTS="-D --cluster-store-consul://192.168.0.18:8500 --cluster-advertise=192.168.0.36:2375"

But it still did not work. I am not sure what ip:port should be set for --cluster-advertise=. Docs, discussions and tutorials are not clear on this advertise setting.

There is something that I am getting wrong here. Please help.

Solution

When you execute the docker run command, be sure to add --net myapp. A full step-by-step tutorial follows below (online version).
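As a quick illustration, here is a minimal sketch of such a run through the swarm manager, reusing the endpoint (tcp://192.168.0.18:2380) and the overlay name (myapp) from the question; the ubuntu image and the bash command are only placeholders, and the overlay must already exist:

sudo docker -H tcp://192.168.0.18:2380 run --rm -it --net myapp ubuntu bash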

How to deploy Swarm on a cluster with a multi-host network

TL;DR: a step-by-step tutorial to deploy a multi-host network using Swarm. I wanted to put this tutorial online ASAP, so I didn't even take time for the presentation. The markdown file is available on the GitHub of my website. Feel free to adapt and share it; it is licensed under a Creative Commons Attribution 4.0 International License.

Prerequisites

Environment

The Swarm manager and the consul master will run on the machine named bugs20. The other nodes, bugs19, bugs18, bugs17 and bugs16, will be Swarm agents and consul members.

Before we start

Consul is used for the multi-host networking; any other key-value store could be used -- note that the engine supports Consul, Etcd and ZooKeeper. A token (or a static file) is used for swarm agent discovery. Tokens use a REST API, so a static file is preferred.

The network

The network range is 192.168.196.0/25. The host named bugsN has the IP address 192.168.196.N.

The docker daemon

All nodes are running the docker daemon as follows:

/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise eth0:2375 --cluster-store consul://127.0.0.1:8500

Option details:

-H tcp://0.0.0.0:2375

Binds the daemon to an interface so that it can be part of the swarm cluster. An IP address can obviously be specified; that is a better solution if you have several NICs.

--cluster-advertise eth0:2375

Defines the interface and the port the docker daemon should use to advertise itself.
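For reference, --cluster-advertise also accepts an explicit ip:port instead of an interface name; a sketch using the two hosts from the question (one line per host, where 2375 is the port each daemon listens on):

--cluster-advertise=192.168.0.18:2375   # on the 192.168.0.18 host
--cluster-advertise=192.168.0.36:2375   # on the 192.168.0.36 host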

--cluster-store consul://127.0.0.1:8500

Defines the URL of the distributed storage backend. In our case we use consul, though there are other discovery tools that can be used; if you want to make up your mind, you may be interested in reading this service discovery comparison.

As consul is distributed, the URL can be local (remember, swarm agents are also consul members), and this is more flexible since you don't have to specify the IP address of the consul master, which can be chosen after the docker daemon has been started.
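On Ubuntu hosts such as the asker's, the same flags can be passed through DOCKER_OPTS in /etc/default/docker and picked up after a daemon restart; a sketch mirroring the daemon line above (the interface name eth0 is an assumption, adjust it to your host):

# /etc/default/docker (sketch; restart the docker service afterwards)
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise=eth0:2375 --cluster-store=consul://127.0.0.1:8500"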

The aliases used

In the following commands these two aliases are used:

alias ldocker='docker -H tcp://0.0.0.0:2375'
alias swarm-docker='docker -H tcp://0.0.0.0:5732' #used only on the swarm manager

Be sure to have the consul binary in your $PATH. If you are in the directory that contains it, export PATH=$PATH:$(pwd) will do the trick.

It is also assumed that the variable $IP has been properly set and exported. It can be done in .bashrc, .zshrc or elsewhere, with something like this:

export IP=$(ifconfig |grep "192.168.196."|cut -d ":" -f 2|cut -d " " -f 1)
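If ifconfig is not available or prints a different layout, an equivalent sketch with the ip tool (assuming a single address in the 192.168.196.0/25 range on each host):

export IP=$(ip -4 addr show | grep -o '192\.168\.196\.[0-9]*' | head -n 1)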

Consul

Let's start by deploying all the consul members and the master as needed.

Consul master (bugs20)

consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node=master20 -bind=$IP -client $IP

Option details:

agent -server

Start the consul agent as a server.

-bootstrap-expect 1

We expect only one master.

-node=master20

This consul server/master will be named "master20".

-bind=192.168.196.20

Specifies the IP address on which it should be bound. Optional if you have only one NIC.

-client=192.168.196.20

Specifies the RPC IP address on which the server should be bound. By default it is localhost. Note that I am unsure about the necessity of this option; it forces you to add -rpc-addr=192.168.196.20:8400 to local requests such as consul members -rpc-addr=192.168.196.20:8400 or consul join -rpc-addr=192.168.196.20:8400 192.168.196.9 to join the consul member that has the IP address 192.168.196.9.
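A small convenience sketch: keeping the flag in a shell variable avoids retyping it (the variable name RPC is arbitrary):

RPC='-rpc-addr=192.168.196.20:8400'
consul members $RPC
consul join $RPC 192.168.196.16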

Consul members (bugs{16..19})

consul agent -data-dir /tmp/consul -node=$HOSTNAME -bind=192.168.196.N

It is suggested to use tmux, or similar, with the option :setw synchronize-panes on, so that this one command, consul agent -data-dir /tmp/consul -node=$HOSTNAME -bind=$IP, starts all consul members at once.
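If tmux is not at hand, a rough alternative sketch over SSH (it assumes passwordless SSH to each node and consul available in the remote $PATH):

for n in 16 17 18 19; do
  ssh bugs$n "nohup consul agent -data-dir /tmp/consul -node=\$HOSTNAME -bind=192.168.196.$n > /tmp/consul.log 2>&1 &" < /dev/null
done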

Join consul members

consul join -rpc-addr=192.168.196.20:8400 192.168.196.16
consul join -rpc-addr=192.168.196.20:8400 192.168.196.17
consul join -rpc-addr=192.168.196.20:8400 192.168.196.18
consul join -rpc-addr=192.168.196.20:8400 192.168.196.19

A one-line command can be used too. If you are using zsh, then consul join -rpc-addr=192.168.196.20:8400 192.168.196.{16..19} is enough, or a for loop: for i in $(seq 16 1 19); do consul join -rpc-addr=192.168.196.20:8400 192.168.196.$i; done. You can verify that your members are part of your consul deployment with the command:

consul members -rpc-addr=192.168.196.20:8400
Node      Address              Status  Type    Build  Protocol  DC
master20  192.168.196.20:8301  alive   server  0.5.2  2         dc1
bugs19    192.168.196.19:8301  alive   client  0.5.2  2         dc1
bugs18    192.168.196.18:8301  alive   client  0.5.2  2         dc1
bugs17    192.168.196.17:8301  alive   client  0.5.2  2         dc1
bugs16    192.168.196.16:8301  alive   client  0.5.2  2         dc1

Consul members and master are deployed and working. The focus will now be on docker and swarm.


Swarm

In the following, the creation of the swarm manager and the discovery of the swarm members are detailed using two different methods: a token and a static file. Tokens use a hosted discovery service on Docker Hub, while a static file is just local and does not use the network (nor any server). The static file solution should be preferred (and is actually easier).

[static file] Start the swarm manager while joining swarm members

Create a file named /tmp/cluster.disco with one swarm_agent_ip:2375 entry per line.

cat /tmp/cluster.disco
192.168.196.16:2375
192.168.196.17:2375
192.168.196.18:2375
192.168.196.19:2375
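Rather than typing it by hand, the file can also be generated; a one-liner sketch for these four agents:

for n in 16 17 18 19; do echo "192.168.196.$n:2375"; done > /tmp/cluster.disco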

Then just start the swarm manager as follows:

ldocker run -v /tmp/cluster.disco:/tmp/cluster.disco -d -p 5732:2375 swarm manage file:///tmp/cluster.disco

And you're done!

[token] Create and start the swarm manager

On the swarm master (bugs20), create a swarm:

ldocker run --rm swarm create > swarm_id

This creates a swarm and saves the token ID in the file swarm_id in the current directory. Once created, the swarm manager needs to be run as a daemon:

ldocker run -d -p 5732:2375 swarm manage token://`cat swarm_id`

To verify that it has started, you can run:

ldocker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
d28238445532        swarm               "/swarm manage token:"   5 seconds ago       Up 4 seconds        0.0.0.0:5732->2375/tcp   cranky_liskov

[token] Join swarm members into the swarm cluster

Then the swarm manager will need some swarm agents to join.

ldocker run swarm join --addr=192.168.196.16:2375 token://`cat swarm_id`
ldocker run swarm join --addr=192.168.196.17:2375 token://`cat swarm_id`
ldocker run swarm join --addr=192.168.196.18:2375 token://`cat swarm_id`
ldocker run swarm join --addr=192.168.196.19:2375 token://`cat swarm_id`

std[in|out] will be busy, so these commands need to be run in different terminals. Adding -d before the join should solve this and enable a for-loop to be used for the joins, as sketched below.
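A sketch of such a for-loop (detached containers, same swarm_id file as above):

for n in 16 17 18 19; do
  ldocker run -d swarm join --addr=192.168.196.$n:2375 token://$(cat swarm_id)
done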

After the join of the swarm members:

auzias@bugs20:~$ ldocker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
d1de6e4ee3fc        swarm               "/swarm join --addr=1"   5 seconds ago       Up 4 seconds        2375/tcp                 fervent_lichterman
338572b87ce9        swarm               "/swarm join --addr=1"   6 seconds ago       Up 4 seconds        2375/tcp                 mad_ramanujan
7083e4d6c7ea        swarm               "/swarm join --addr=1"   7 seconds ago       Up 5 seconds        2375/tcp                 naughty_sammet
0c5abc6075da        swarm               "/swarm join --addr=1"   8 seconds ago       Up 6 seconds        2375/tcp                 gloomy_cray
ab746399f106        swarm               "/swarm manage token:"   25 seconds ago      Up 23 seconds       0.0.0.0:5732->2375/tcp   ecstatic_shockley

After the discovery of the swarm members

To verify that the members have been discovered properly, you can execute swarm-docker info:

auzias@bugs20:~$ swarm-docker info
Containers: 4
Images: 4
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 4
 bugs16: 192.168.196.16:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 12
  └ Reserved Memory: 0 B / 49.62 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), storagedriver=aufs
 bugs17: 192.168.196.17:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 12
  └ Reserved Memory: 0 B / 49.62 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), storagedriver=aufs
 bugs18: 192.168.196.18:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 12
  └ Reserved Memory: 0 B / 49.62 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), storagedriver=aufs
 bugs19: 192.168.196.19:2375
  └ Containers: 4
  └ Reserved CPUs: 0 / 12
  └ Reserved Memory: 0 B / 49.62 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-4-amd64, operatingsystem=Debian GNU/Linux 8 (jessie), storagedriver=aufs
CPUs: 48
Total Memory: 198.5 GiB
Name: ab746399f106

At this point swarm is deployed and all the containers you run will be scheduled across the different nodes. By executing several of these:

auzias@bugs20:~$ swarm-docker run --rm -it ubuntu bash

and then a:

auzias@bugs20:~$ swarm-docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
45b19d76d38e        ubuntu              "bash"              6 seconds ago       Up 5 seconds                            bugs18/boring_mccarthy
53e87693606e        ubuntu              "bash"              6 seconds ago       Up 5 seconds                            bugs16/amazing_colden
b18081f26a35        ubuntu              "bash"              6 seconds ago       Up 4 seconds                            bugs17/small_newton
f582d4af4444        ubuntu              "bash"              7 seconds ago       Up 4 seconds                            bugs18/naughty_banach
b3d689d749f9        ubuntu              "bash"              7 seconds ago       Up 4 seconds                            bugs17/pensive_keller
f9e86f609ffa        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs16/pensive_cray
b53a46c01783        ubuntu              "bash"              7 seconds ago       Up 4 seconds                            bugs18/reverent_ritchie
78896a73191b        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs17/gloomy_bell
a991d887a894        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs16/angry_swanson
a43122662e92        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs17/pensive_kowalevski
68d874bc19f9        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs16/modest_payne
e79b3307f6e6        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs18/stoic_wescoff
caac9466d86f        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs17/goofy_snyder
7748d01d34ee        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs16/fervent_einstein
99da2a91a925        ubuntu              "bash"              7 seconds ago       Up 5 seconds                            bugs18/modest_goodall
cd308099faac        ubuntu              "bash"              7 seconds ago       Up 6 seconds                            bugs19/furious_ritchie

As shown, the containers are disseminated over bugs{16..19}.


Multi-host network

A network overlay is needed so that all the containers can be "plugged into" this overlay. To create this network overlay, execute:

auzias@bugs20:~$ swarm-docker network create -d overlay net
auzias@bugs20:~$ swarm-docker network ls|grep "net"
c96760503d06        net                 overlay

And voilà!

Once this overlay is created, add --net net to the command swarm-docker run --rm -it ubuntu bash and all your containers will be able to communicate natively as if they were on the same LAN. The default network is 10.0.0.0/24.
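A quick sketch to check the connectivity (the container name c1 is arbitrary; it assumes name resolution works on the overlay and that the ubuntu image ships ping):

swarm-docker run -d --name c1 --net net ubuntu sleep 3600
swarm-docker run --rm -it --net net ubuntu ping -c 3 c1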

Enabling Multicast

Multicast is not supported by the default overlay driver. Another driver is required to be able to use multicast. The docker plugin weave net does support multicast.

To use this driver, once it is installed, you will need to run weave launch on all Swarm agents and on the Swarm manager. Then you'll need to connect the weave peers together; this is done by running weave connect $SWARM_MANAGER_IP. It does not have to be the IP address of the Swarm manager, but it is cleaner to do so (or to use another node than the Swarm agents).
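A sketch of that sequence, using the manager's address 192.168.196.20 as the connect target, as suggested above:

# on every node (Swarm agents and the Swarm manager):
weave launch
# then, on each Swarm agent:
weave connect 192.168.196.20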

At this point the weave cluster is deployed, but no weave network has been created yet. Running swarm-docker network create --driver weave weave-net will create the weave network named weave-net. Starting containers with --net weave-net will enable them to share the same LAN and use multicast. An example of a full command to start such a container is: swarm-docker run --rm -it --privileged --net=weave-net ubuntu bash.
