Trouble with Nginx and Multiple Meteor/Nodejs Apps

Problem Description

I understand that multiple Node.js apps, and I assume by extension Meteor apps, can be run on one server using Nginx. I've got Nginx set up and running on an Ubuntu server just fine, and I can even get it to respond to requests and proxy them to one of my applications. However, I hit a roadblock when trying to get Nginx to proxy traffic to the second application.

Some background:

  • App one runs on port 8001
  • App two runs on port 8002
  • Nginx listens on port 80
  • I'm trying to get nginx to send traffic at / to app one and traffic at /app2/ to app two
  • Both apps can be reached by going to domain:8001 and domain:8002

My Nginx config:

upstream mydomain.com {
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}

# the nginx server instance
server {
  listen 0.0.0.0:80 default_server;
  access_log /var/log/nginx/mydomain.log;

  location /app2 {
    rewrite /app2/(.*) /$1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:8002;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:8001;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}

Any insight as to what might be going on when traffic goes to /app2/ would be greatly appreciated!

Recommended Answer

proxy_pass http://127.0.0.1:8002;    <-- these should be
proxy_pass http://my_upstream_name;  <-- this

Then:

upstream my_upstream_name {
  # Nginx does round-robin load balancing, so some users will connect to / and others to /app2
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}
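
Putting that together for the setup in the question, here is a minimal sketch of what the corrected config could look like, assuming app one stays on port 8001 and app two on port 8002, and using one upstream per app as recommended further down (the upstream names app_one and app_two are only illustrative):

    upstream app_one {
      server 127.0.0.1:8001;
    }

    upstream app_two {
      server 127.0.0.1:8002;
    }

    server {
      listen 80 default_server;

      location /app2 {
        # strip the /app2/ prefix before handing the request to app two
        rewrite /app2/(.*) /$1 break;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_pass http://app_two;
      }

      location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_pass http://app_one;
      }
    }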

Some tips for controlling the proxying (see the nginx documentation):

Here we go:

weight = NUMBER - sets the weight of the server; if not set, the weight is equal to one. Use it to unbalance the default round robin.

max_fails = NUMBER - the number of unsuccessful attempts at communicating with the server within the time period (assigned by the parameter fail_timeout) after which it is considered inoperative. If not set, the number of attempts is one. A value of 0 turns off this check. What counts as a failure is defined by proxy_next_upstream or fastcgi_next_upstream (except http_404 errors, which do not count towards max_fails).

fail_timeout = TIME - the time during which *max_fails* unsuccessful attempts at communicating with the server must occur for the server to be considered inoperative, and also the time for which the server will then be considered inoperative (before another attempt is made). If not set, the time is 10 seconds. fail_timeout has nothing to do with upstream response time; use proxy_connect_timeout and proxy_read_timeout to control that.
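
For reference, those two timeouts live in the proxied location rather than in the upstream block. A minimal sketch (the values are only illustrative):

    location / {
      proxy_pass http://my_upstream_name;
      proxy_connect_timeout 5s;   # max time to establish a connection to the upstream
      proxy_read_timeout    60s;  # max time between two successive reads from the upstream
    }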

down - marks the server as permanently offline; to be used with the directive ip_hash.

backup - (0.6.7 or later) only use this server if the non-backup servers are all down or busy (cannot be used with the directive ip_hash).

Generic example:

    upstream  my_upstream_name  {
      server   backend1.example.com    weight=5;
      server   127.0.0.1:8080          max_fails=3  fail_timeout=30s;
      server   unix:/tmp/backend3;
    }
    # proxy_pass http://my_upstream_name;

This is what you need:

If you just want to control the load between vhosts for one app:

    upstream  my_upstream_name  {
      server   127.0.0.1:8080          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8081          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8082          max_fails=3  fail_timeout=30s;
      # amazingness no.1: the keyword "backup" means this server is only used when the rest are non-responsive
      server   127.0.0.1:8083 backup;
    }
    # proxy_pass http://my_upstream_name;

If you have 2 or more apps: 1 upstream per app, e.g.:

    upstream  my_upstream_name  {
      server   127.0.0.1:8080          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8081          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8082          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8083 backup;
    }

    upstream  my_upstream_name_app2  {
      server   127.0.0.1:8084          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8085          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8086          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8087 backup;
    }

    upstream  my_upstream_name_app3  {
      server   127.0.0.1:8088          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8089          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8090          max_fails=3  fail_timeout=30s;
      server   127.0.0.1:8091 backup;
    }
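
And a sketch of how a matching server block could reference those upstreams, assuming the same path layout as the question (/ for app one, /app2/ for app two) plus a hypothetical /app3/ path for the third app; the proxy_http_version/Upgrade headers from the earlier sketch would still apply in each location:

    server {
      listen 80 default_server;

      location /app2 {
        rewrite /app2/(.*) /$1 break;
        proxy_pass http://my_upstream_name_app2;
      }

      location /app3 {
        rewrite /app3/(.*) /$1 break;
        proxy_pass http://my_upstream_name_app3;
      }

      location / {
        proxy_pass http://my_upstream_name;
      }
    }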

Hope it helps.
