Docker 1.12 Swarm Mode - Load balance tasks of the same service on single node


Problem Description

In Docker 1.12 Swarm Mode, if I have more than one task of the same service running on a single node and publishing the same port, is it possible to do any kind of load balancing between the tasks? Or, what's the purpose of having more instances of a service than the number of nodes?
For example:

docker swarm init
docker service create --name web --replicas=2 --publish=80:80 nginx

Now, if I open the browser and access http://localhost/ (refreshing the page many times), all connections seem to be handled by the same task, as can be seen by doing:

docker logs [container1]
docker logs [container2]

PS: OK, I know it makes no sense to have a swarm with a single node, but the same thing seems to occur if I have 10 nodes in the swarm (with a service scaled to 10 replicas) and then lose one of those nodes (two tasks of the service will end up running on the same node and one of them will never receive connections).
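
For reference, task placement after a node failure can be checked with the standard CLI - a rough sketch, assuming the web service from the example above:

# list the nodes in the swarm and their availability
docker node ls

# list every task of the service together with the node it was scheduled on;
# after a node goes down, the rescheduled replicas show up here
docker service ps web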

Thanks.

Solution

No, the routing mesh does distribute requests among all the containers, even if several containers are running on the same node.

You don't see that with the stock nginx image because it's configured with a high keep-alive setting, so your client keeps returning to the same container when you refresh.
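
One rough way to see the distribution with the stock image is to take browser keep-alive out of the picture: each separate curl invocation opens a fresh connection, and the routing mesh balances per connection. A minimal sketch, assuming the web service from the question (container names are placeholders, as in the question):

# send a batch of requests, each on a new connection
for i in $(seq 1 10); do curl -s -o /dev/null http://localhost/; done

# then compare the access logs of the replicas
docker logs [container1]
docker logs [container2]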

Try this custom Nginx image instead:

docker service create --name nginx --replicas 10 -p 80:80 sixeyed/nginx-with-hostname

(sixeyed/nginx-with-hostname is an automated build; you can check the source on GitHub.)

There's a 1-second keep-alive specified, and a custom response header X-Host which tells you the hostname of the server - in this case it will be the container ID.

I made three successive requests, which all got served by different containers:

> curl -k http://my-swarm.com/ | grep X-Host
X-Host: 5920bc3c7659 

> curl -k http://my-swarm.com/ | grep X-Host    
X-Host: eb228bb39f58 

> curl -k http://my-swarm.com/ | grep X-Host
X-Host: 891bafd52c90  

Those containers all happen to be running on the manager node in a 2-node swarm. Other requests got served by containers on the worker, so Docker is distributing them around all the tasks.
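
To quantify the spread, you can also batch up requests and count how many each container answers - a quick sketch against the same placeholder hostname, where -D - dumps the response headers so grep can pick out X-Host:

# 30 requests, grouped by the container that served them
for i in $(seq 1 30); do
  curl -sk -D - -o /dev/null http://my-swarm.com/ | grep X-Host
done | sort | uniq -c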
