Trouble with Nginx and Multiple Meteor/Nodejs Apps
Question
I understand that multiple node.js apps, and I assume by extension Meteor apps, can be run on one server using Nginx. I've got Nginx set up and running on an Ubuntu server just fine; I can even get it to respond to requests and proxy them to one of my applications. However, I hit a roadblock when trying to get Nginx to proxy traffic to the second application.
Some background:
- First app running on port 8001
- Second app running on port 8002
- Nginx listening on port 80
- Trying to get nginx to send traffic for / to app one and traffic for /app2/ to app two
- Going to domain:8001 and domain:8002 reaches both apps just fine
My Nginx config:
upstream mydomain.com {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}
# the nginx server instance
server {
    listen 0.0.0.0:80 default_server;
    access_log /var/log/nginx/mydomain.log;

    location /app2 {
        rewrite /app2/(.*) /$1 break;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8002;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8001;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Any insight as to what might be going on when traffic goes to /app2/ would be greatly appreciated!
Answer
The lines

    proxy_pass http://127.0.0.1:8001;
    proxy_pass http://127.0.0.1:8002;

should instead point at a named upstream:

    proxy_pass http://my_upstream_name;
Then:
upstream my_upstream_name {
    # Nginx does round-robin load balancing; some users will connect to / and others to /app2
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}
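Note that a single round-robin upstream mixes both apps behind both paths; since the question wants / on app one and /app2/ on app two, the "one upstream per app" pattern described further down applies. A minimal sketch under that assumption (the names app_one and app_two are illustrative, not from the original config):

```nginx
# One upstream per app, so /app2 traffic never lands on app one
upstream app_one {
    server 127.0.0.1:8001;
}
upstream app_two {
    server 127.0.0.1:8002;
}

server {
    listen 80 default_server;

    location /app2 {
        # strip the /app2 prefix before handing the request to the app
        rewrite /app2/(.*) /$1 break;
        proxy_pass http://app_two;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
    }

    location / {
        proxy_pass http://app_one;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
    }
}
```

The proxy_http_version and Upgrade/Connection headers are kept because Meteor relies on WebSockets, which need the HTTP/1.1 upgrade handshake to pass through the proxy.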
Some tips for controlling the proxy:
Take a look at the nginx docs.
So here we go:
weight = NUMBER - sets the weight of the server; if not set, the weight is equal to one. This unbalances the default round robin.
max_fails = NUMBER - the number of unsuccessful attempts at communicating with the server within the time period (assigned by the parameter fail_timeout) after which it is considered inoperative. If not set, the number of attempts is one. A value of 0 turns off this check. What is considered a failure is defined by proxy_next_upstream or fastcgi_next_upstream (except http_404 errors, which do not count towards max_fails).
fail_timeout = TIME - the time during which max_fails unsuccessful attempts at communication with the server must occur for the server to be considered inoperative, and also the time for which the server will be considered inoperative (before another attempt is made). If not set, the time is 10 seconds. fail_timeout has nothing to do with upstream response time; use proxy_connect_timeout and proxy_read_timeout for controlling that.
down - marks the server as permanently offline, to be used with the directive ip_hash.
backup - (0.6.7 or later) only uses this server if the non-backup servers are all down or busy (cannot be used with the directive ip_hash).
A generic example:
upstream my_upstream_name {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}
# proxy_pass http://my_upstream_name;
Now, for exactly what you need:
If you just want to control the load between vhosts for one app:
upstream my_upstream_name {
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8083 backup;
    # proxy_pass http://my_upstream_name;
    # amazingness no.1: the keyword "backup" means this server should only be used when the rest are non-responsive
}
If you have 2 or more apps, use 1 upstream per app, e.g.:
upstream my_upstream_name {
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8083 backup;
}
upstream my_upstream_name_app2 {
    server 127.0.0.1:8084 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8085 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8086 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8087 backup;
}
upstream my_upstream_name_app3 {
    server 127.0.0.1:8088 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8089 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8090 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8091 backup;
}
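The upstream blocks above only name the backend pools; each app still needs a location that proxies to its pool. A minimal sketch, assuming the path-prefix layout from the question (the /app3 path and the rewrite rules are illustrative):

```nginx
server {
    listen 80 default_server;

    location /app2 {
        rewrite /app2/(.*) /$1 break;
        proxy_pass http://my_upstream_name_app2;
    }

    location /app3 {
        rewrite /app3/(.*) /$1 break;
        proxy_pass http://my_upstream_name_app3;
    }

    # everything else goes to the first app's pool
    location / {
        proxy_pass http://my_upstream_name;
    }
}
```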
Hope it helps.