docker-compose scale with sticky sessions


Question

I have a webserver that requires a websocket connection in production. I deploy it using docker-compose with nginx as a proxy, so my compose file looks like this:

version: '2'
services:
   app:
     restart: always

   nginx:
     restart: always
     ports:
       - "80:80"

Now if I scale the "app" service to multiple instances, docker-compose will perform round robin on each call to the internal DNS name "app".

Is there a way to tell the docker-compose load balancer to apply sticky sessions?

Another solution: is there a way to solve it using nginx?

A possible solution that I don't like:

Multiple definitions of the app service:

version: '2'
services:
   app1:
     restart: always

   app2:
     restart: always

   nginx:
     restart: always
     ports:
       - "80:80"

(And then in the nginx config file I can define sticky sessions between app1 and app2.)
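For reference, that manual approach could be sketched in the nginx config roughly like this — a hand-written example, assuming the compose service names app1 and app2 resolve on the compose network and the app listens on port 80:

```nginx
upstream app_backend {
    ip_hash;             # sticky: the same client IP always hits the same backend
    server app1:80;
    server app2:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        # headers needed for websocket upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The obvious drawback is exactly what the question points out: every new replica means editing both the compose file and this upstream block by hand.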

The best result I found from searching: https://github.com/docker/dockercloud-haproxy

But this requires me to add another service (maybe replacing nginx?), and its docs are pretty sparse about sticky sessions.

I wish docker would just allow configuring this with a simple line in the compose file.

Thanks!

Answer

Take a look at jwilder/nginx-proxy. This image provides an nginx reverse proxy that listens for containers defining the VIRTUAL_HOST environment variable and automatically updates its configuration as containers are created and removed. tpcwang's fork lets you enable the IP_HASH directive at the container level to get sticky sessions.

Consider the following compose file:

nginx:
  image: tpcwang/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
app:
  image: tutum/hello-world
  environment:
    - VIRTUAL_HOST=<your_ip_or_domain_name>
    - USE_IP_HASH=1

Let's get it up and running and then scale app to three instances:

docker-compose up -d
docker-compose scale app=3

If you check the nginx configuration file you'll see something like this:

docker-compose exec nginx cat /etc/nginx/conf.d/default.conf

...
upstream 172.16.102.132 {
    ip_hash;
            # desktop_app_3
            server 172.17.0.7:80;
            # desktop_app_2
            server 172.17.0.6:80;
            # desktop_app_1
            server 172.17.0.4:80;
}
server {
    server_name 172.16.102.132;
    listen 80 ;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://172.16.102.132;
    }
}

The nginx container has automatically detected the three instances and updated its configuration to route requests to all of them using sticky sessions.

If we access the app, we can see that it always reports the same hostname on each refresh. If we remove the USE_IP_HASH environment variable, we'll see that the hostname actually changes; that is, the nginx proxy is using round robin to balance our requests.
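The ip_hash directive is what makes the routing sticky: nginx hashes the client address (for IPv4, the first three octets) and always picks the same upstream server for that hash. A toy sketch of the idea in Python — not nginx's actual hash function, and the names here are purely illustrative:

```python
def ip_hash(client_ip: str, servers: list[str]) -> str:
    """Simplified sketch of nginx-style ip_hash: derive the key from the
    first three octets of the IPv4 address, so one client (and its /24
    neighbors) always maps to the same server."""
    key = ".".join(client_ip.split(".")[:3])
    # Any deterministic hash demonstrates the idea; nginx uses its own.
    idx = sum(ord(c) for c in key) % len(servers)
    return servers[idx]

servers = ["172.17.0.4:80", "172.17.0.6:80", "172.17.0.7:80"]
print(ip_hash("203.0.113.9", servers))  # same client -> same server on every call
print(ip_hash("203.0.113.9", servers))
```

Because the mapping depends only on the client address, no session state needs to be shared between the backends — which is exactly why it works with a stateless proxy like nginx-proxy.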

