How to configure Docker port mapping to use Nginx as an upstream proxy?


Question


Update II

It's now July 16th, 2015 and things have changed again. I've discovered this automagical container from Jason Wilder: https://github.com/jwilder/nginx-proxy and it solves this problem in about as long as it takes to docker run the container. This is now the solution I'm using to solve this problem.
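
For reference, the basic pattern that repo documents is roughly this: run the proxy with the Docker socket mounted, then give each backend container a VIRTUAL_HOST environment variable (the mydockerhub/api image name below is just the placeholder used elsewhere in this post):

# Run the proxy; it watches the Docker daemon and regenerates its nginx
# config whenever containers start or stop.
docker run -d -p 80:80 -p 443:443 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# Start a backend with VIRTUAL_HOST set; nginx-proxy creates a vhost for it.
docker run -d -e VIRTUAL_HOST=api.myapp.com mydockerhub/api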

Update

It's now July of 2015 and things have changed drastically with regard to networking Docker containers. There are now many different offerings that solve this problem (in a variety of ways).

You should use this post to gain a basic understanding of the docker --link approach to service discovery, which is about as basic as it gets, works very well, and actually requires less fancy-dancing than most of the other solutions. It is limited in that it's quite difficult to network containers on separate hosts in any given cluster, and containers cannot be restarted once networked, but does offer a quick and relatively easy way to network containers on the same host. It's a good way to get an idea of what the software you'll likely be using to solve this problem is actually doing under the hood.

Additionally, you'll probably also want to check out Docker's nascent network, Hashicorp's consul, Weaveworks' weave, Jeff Lindsay's progrium/consul & gliderlabs/registrator, and Google's Kubernetes.

There are also the CoreOS offerings that utilize etcd, fleet, and flannel.

And if you really want to have a party you can spin up a cluster to run Mesosphere, or Deis, or Flynn.

If you're new to networking (like me) then you should get out your reading glasses, pop "Paint The Sky With Stars — The Best of Enya" on the Wi-Hi-Fi, and crack a beer — it's going to be a while before you really understand exactly what it is you're trying to do. Hint: You're trying to implement a Service Discovery Layer in your Cluster Control Plane. It's a very nice way to spend a Saturday night.

It's a lot of fun, but I wish I'd taken the time to educate myself better about networking in general before diving right in. I eventually found a couple posts from the benevolent Digital Ocean Tutorial gods: Introduction to Networking Terminology and Understanding ... Networking. I suggest reading those a few times first before diving in.

Have fun!



Original Post

I can't seem to grasp port mapping for Docker containers. Specifically, how to pass requests from Nginx to another container listening on another port on the same server.

I've got a Dockerfile for an Nginx container like so:

FROM ubuntu:14.04
MAINTAINER Me <me@myapp.com>

RUN apt-get update && apt-get install -y htop git nginx

ADD sites-enabled/api.myapp.com /etc/nginx/sites-enabled/api.myapp.com
ADD sites-enabled/app.myapp.com /etc/nginx/sites-enabled/app.myapp.com
ADD nginx.conf /etc/nginx/nginx.conf

RUN echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80 443

CMD ["service", "nginx", "start"]



And then the api.myapp.com config file looks like so:

upstream api_upstream{

    server 0.0.0.0:3333;

}


server {

    listen 80;
    server_name api.myapp.com;
    return 301 https://api.myapp.com/$request_uri;

}


server {

    listen 443;
    server_name api.myapp.com;

    location / {

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass http://api_upstream;

    }

}

And then another for app.myapp.com as well.

And then I run:

sudo docker run -p 80:80 -p 443:443 -d --name Nginx myusername/nginx


And it all stands up just fine, but the requests are not getting passed-through to the other containers/ports. And when I ssh into the Nginx container and inspect the logs I see no errors.

Any help?

Solution

@T0xicCode's answer is correct, but I thought I would expand on the details since it actually took me about 20 hours to finally get a working solution implemented.

If you're looking to run Nginx in its own container and use it as a reverse proxy to load balance multiple applications on the same server instance then the steps you need to follow are as such:

Link Your Containers

When you docker run your containers, typically by inputting a shell script into User Data, you can declare links to any other running containers. This means that you need to start your containers up in order and only the latter containers can link to the former ones. Like so:

#!/bin/bash
sudo docker run -p 3000:3000 --name API mydockerhub/api
sudo docker run -p 3001:3001 --link API:API --name App mydockerhub/app
sudo docker run -p 80:80 -p 443:443 --link API:API --link App:App --name Nginx mydockerhub/nginx

So in this example, the API container isn't linked to any others, but the App container is linked to API and Nginx is linked to both API and App.

The result of this is changes to the env vars and the /etc/hosts files inside the containers that declare the links (in this example, the App and Nginx containers). The results look like so:

/etc/hosts

Running cat /etc/hosts within your Nginx container will produce the following:

172.17.0.5  0fd9a40ab5ec
127.0.0.1   localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3  App
172.17.0.2  API



ENV Vars

Running env within your Nginx container will produce the following:

API_PORT=tcp://172.17.0.2:3000
API_PORT_3000_TCP_PROTO=tcp
API_PORT_3000_TCP_PORT=3000
API_PORT_3000_TCP_ADDR=172.17.0.2

APP_PORT=tcp://172.17.0.3:3001
APP_PORT_3001_TCP_PROTO=tcp
APP_PORT_3001_TCP_PORT=3001
APP_PORT_3001_TCP_ADDR=172.17.0.3

I've truncated many of the actual vars, but the above are the key values you need to proxy traffic to your containers.

To obtain a shell to run the above commands within a running container, use the following:

sudo docker exec -i -t Nginx bash

You can see that you now have both /etc/hosts file entries and env vars that contain the local IP address for any of the containers that were linked. So far as I can tell, this is all that happens when you run containers with link options declared. But you can now use this information to configure nginx within your Nginx container.
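
One quick way to sanity-check the injected values from inside the Nginx container (using the docker exec shell shown above):

# Inside the Nginx container:
env | grep _TCP_ADDR          # the per-link address vars docker injected
grep -E 'API|App' /etc/hosts  # the hostname entries docker added for the links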



Configuring Nginx

This is where it gets a little tricky, and there are a couple of options. You can choose to configure your sites to point to an entry in the /etc/hosts file that docker created, or you can utilize the ENV vars and run a string replacement (I used sed) on your nginx.conf and any other conf files that may be in your /etc/nginx/sites-enabled folder to insert the IP values.



OPTION A: Configure Nginx Using ENV Vars

This is the option that I went with because I couldn't get the /etc/hosts file option to work. I'll be trying Option B soon enough and will update this post with any findings.

The key difference between this option and using the /etc/hosts file option is how you write your Dockerfile to use a shell script as the CMD argument, which in turn handles the string replacement to copy the IP values from ENV to your conf file(s).

Here's the set of configuration files I ended up with:

Dockerfile

FROM ubuntu:14.04
MAINTAINER Your Name <you@myapp.com>

RUN apt-get update && apt-get install -y nano htop git nginx

ADD nginx.conf /etc/nginx/nginx.conf
ADD api.myapp.conf /etc/nginx/sites-enabled/api.myapp.conf
ADD app.myapp.conf /etc/nginx/sites-enabled/app.myapp.conf
ADD Nginx-Startup.sh /etc/nginx/Nginx-Startup.sh

EXPOSE 80 443

CMD ["/bin/bash","/etc/nginx/Nginx-Startup.sh"]

nginx.conf

daemon off;
user www-data;
pid /var/run/nginx.pid;
worker_processes 1;


events {
    worker_connections 1024;
}


http {

    # Basic Settings

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 33;
    types_hash_max_size 2048;

    server_tokens off;
    server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;


    # Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;


    # Gzip Settings

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 3;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/xml text/css application/x-javascript application/json;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    # Virtual Host Configs  
    include /etc/nginx/sites-enabled/*;

    # Error Page Config
    #error_page 403 404 500 502 /srv/Splash;


}

NOTE: It's important to include daemon off; in your nginx.conf file to ensure that your container doesn't exit immediately after launching.

api.myapp.conf

upstream api_upstream{
    server APP_IP:3000;
}

server {
    listen 80;
    server_name api.myapp.com;
    return 301 https://api.myapp.com/$request_uri;
}

server {
    listen 443;
    server_name api.myapp.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass http://api_upstream;
    }

}

Nginx-Startup.sh

#!/bin/bash
sed -i 's/APP_IP/'"$API_PORT_3000_TCP_ADDR"'/g' /etc/nginx/sites-enabled/api.myapp.conf
sed -i 's/APP_IP/'"$APP_PORT_3001_TCP_ADDR"'/g' /etc/nginx/sites-enabled/app.myapp.conf

service nginx start

I'll leave it up to you to do your homework about most of the contents of nginx.conf and api.myapp.conf.

The magic happens in Nginx-Startup.sh where we use sed to do string replacement on the APP_IP placeholder that we've written into the upstream block of our api.myapp.conf and app.myapp.conf files.

This ask.ubuntu.com question explains it very nicely: Find and replace text within a file using commands

GOTCHA On OSX, sed handles the -i flag differently. On Ubuntu (GNU sed), the -i flag performs the replacement 'in place': it opens the file, changes the text, and then 'saves over' the same file. On OSX (BSD sed), the -i flag requires an argument, the suffix to append to a backup copy of the file; if you don't want a backup you must pass '' as the value for the -i flag.
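
For illustration, using the same placeholder, IP, and file name as above, the two forms look like this:

# Ubuntu / GNU sed: in-place edit, no backup file
sed -i 's/APP_IP/172.17.0.2/g' /etc/nginx/sites-enabled/api.myapp.conf

# OSX / BSD sed: -i takes a backup suffix; pass '' for no backup
sed -i '' 's/APP_IP/172.17.0.2/g' /etc/nginx/sites-enabled/api.myapp.conf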

GOTCHA To have the shell expand the ENV vars inside the sed expression, you need to close the single quotes and wrap the var in double quotes; otherwise sed is handed the literal variable name. So the correct, albeit wonky-looking, syntax is as above.
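
To make the quoting issue concrete, here's a broken form next to the working one (the second is what Nginx-Startup.sh uses):

# Won't work: inside single quotes the shell does not expand the variable,
# so sed writes the literal text $API_PORT_3000_TCP_ADDR into the file.
sed -i 's/APP_IP/$API_PORT_3000_TCP_ADDR/g' /etc/nginx/sites-enabled/api.myapp.conf

# Works: close the single quotes and wrap the variable in double quotes
# so the shell expands it before sed runs.
sed -i 's/APP_IP/'"$API_PORT_3000_TCP_ADDR"'/g' /etc/nginx/sites-enabled/api.myapp.conf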

So docker has launched our container and triggered the Nginx-Startup.sh script to run, which has used sed to change the value APP_IP to the corresponding ENV variable we provided in the sed command. We now have conf files within our /etc/nginx/sites-enabled directory that have the IP addresses from the ENV vars that docker set when starting up the container. Within your api.myapp.conf file you'll see the upstream block has changed to this:

upstream api_upstream{
    server 172.17.0.2:3000;
}

The IP address you see may be different, but I've noticed that it's usually in the 172.17.0.x range.

You should now have everything routing appropriately.

GOTCHA You cannot restart/rerun any containers once you've run the initial instance launch. Docker provides each container with a new IP upon launch and does not seem to re-use any that it's used before. So api.myapp.com will get 172.17.0.2 the first time, but then get 172.17.0.4 the next time. But Nginx will have already set the first IP into its conf files, or in its /etc/hosts file, so it won't be able to determine the new IP for api.myapp.com. The solution to this is likely to use CoreOS and its etcd service which, in my limited understanding, acts like a shared ENV for all machines registered into the same CoreOS cluster. This is the next toy I'm going to play with setting up.



OPTION B: Use /etc/hosts File Entries

This should be the quicker, easier way of doing this, but I couldn't get it to work. Ostensibly you just input the value of the /etc/hosts entry into your api.myapp.conf and app.myapp.conf files, but I couldn't get this method to work.

UPDATE: See @Wes Tod's answer for instructions on how to make this method work.

Here's the attempt that I made in api.myapp.conf:

upstream api_upstream{
    server API:3000;
}

Considering that there's an entry in my /etc/hosts file like so: 172.17.0.2 API, I figured it would just pull in the value, but it doesn't seem to.

I also had a couple of ancillary issues with my Elastic Load Balancer sourcing from all AZs, so that may have been the issue when I tried this route. Instead I had to learn how to handle replacing strings in Linux, so that was fun. I'll give this a try in a while and see how it goes.
