Bad gateway 502 after small load test on fastcgi-mono-server through nginx and ServiceStack

Problem description

I am trying to run a webservice API with ServiceStack under nginx and fastcgi-mono-server.

The server starts fine and the API is up and running. I can see the response times in the browser through ServiceStack profiler and they run under 10ms.

But as soon as I do a small load test using "siege" (only 500 requests using 10 connections), I start getting 502 Bad Gateway. And to recover, I have to restart the fastcgi-mono-server.
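
For reference, a siege run matching that description would look something like the sketch below (the hostname is the one from the nginx config further down; the exact flags used aren't shown in the question, so this is only an assumed equivalent):

# 10 concurrent connections, 50 repetitions each = 500 requests in total
siege -c 10 -r 50 http://local-api.acme.com/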

The nginx server is fine. The fastcgi-mono-server is the one that stops responding after this small load.

I've tried using both tcp and unix sockets (I am aware of a permissions problem with the unix socket, but I already fixed that).

Here is my configuration:

server {
    listen       80;
    listen       local-api.acme.com:80;
    server_name  local-api.acme.com;

    location / {
        root   /Users/admin/dev/acme/Acme.Api/;
        index index.html index.htm default.aspx Default.aspx;
        fastcgi_index Default.aspx;
        fastcgi_pass unix:/tmp/fastcgi.socket;
        include /usr/local/etc/nginx/fastcgi_params;            
    }
}

To start the fastcgi-mono-server:

sudo fastcgi-mono-server4 /applications=local-api.acme.com:/:/Users/admin/dev/acme/Acme.Api/ /socket=unix:/tmp/fastcgi.socket /multiplex=True /verbose=True /printlog=True
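
For reference, a TCP-socket variant of the same command would look roughly like the sketch below (port 9000 is just an illustrative choice; the matching nginx directive is shown as a comment):

# fastcgi-mono-server4 listening on a TCP socket instead of a unix socket
sudo fastcgi-mono-server4 /applications=local-api.acme.com:/:/Users/admin/dev/acme/Acme.Api/ /socket=tcp:127.0.0.1:9000 /multiplex=True /verbose=True /printlog=True

# in the nginx location block, point fastcgi_pass at the same address:
#     fastcgi_pass 127.0.0.1:9000;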

EDIT:

I forgot to mention an important detail: I am running this on Mac OS X.

I also tested all the possible web server configurations for Mono: console application, Apache mod_mono, nginx fast_cgi and proxy_pass modules. All presented the same problem of crashing after a few requests under Mono 3.2.3 + Mac OS X.

I was able to test the same configuration on a Linux machine and didn't have any problems there.

So it seems it is a Mono/ASP.NET issue when running on Mac OS X.

Answer

EDIT:

I do see in the original question that there were no problems running under Linux; however, I was facing difficulties on Linux as well under "high load" scenarios (i.e. 50+ concurrent requests), so this might apply to OS X as well...

I dug a little deeper into this problem and I found a solution for my setup - I'm no longer receiving 502 Bad Gateway errors when load testing my simple hello world application. I tested everything on Ubuntu 13.10 with a fresh compile of Mono 3.2.3 installed in /opt/mono.

When you start fastcgi-mono-server4 with "/verbose=True /printlog=True" you will notice the following output:

Root directory: /some/path/you/defined
Parsed unix:/tmp/nginx-1.sockets as URI unix:/tmp/nginx-1.sockets
Listening on file /tmp/nginx-1.sockets with default permissions
Max connections: 1024
Max requests: 1024

The important lines are "Max connections" and "Max requests". These basically tell how many active TCP connections and requests the mono-fastcgi server will be able to handle - in this case, 1024.
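
Those numbers are the server's defaults. Depending on the xsp version, there may also be /maxconns and /maxreqs switches to raise them directly (these switch names are an assumption on my part - verify them against the server's help output before relying on them), e.g.:

# assumed switches; check fastcgi-mono-server4's help output first
/opt/mono/bin/fastcgi-mono-server4 /maxconns=4096 /maxreqs=4096 /applications=/:`pwd` /socket=unix:/tmp/nginx-1.sockets &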

My NGINX configuration read:

worker_processes 4;
events {
    worker_connections  1024;
}

So I have 4 workers, each of which can have 1024 connections. Thus NGINX happily accepts 4096 concurrent connections, which are then sent to mono-fastcgi (which only wishes to handle 1024 connections). Therefore, mono-fastcgi is "protecting itself" and stops serving requests. There are two solutions to this:


  1. Lower the amount of requests NGINX can accept

  2. Increase your fastcgi upstream pool

Option 1 is trivially solved by changing the NGINX configuration to read something like:

worker_processes 4; # <-- or 1 here
events {
    worker_connections  256; # <--- if 1 above, then 1024 here
}

However, this could very likely mean that you're not able to max out the resources on your machine.

The solution to option 2 is a bit trickier. First, mono-fastcgi must be started multiple times. For this I created the following script (placed inside the directory of the website that should be started):

# start one fastcgi-mono-server4 instance in the background on the given socket,
# serving the current directory as the application root
function startFastcgi {
    /opt/mono/bin/fastcgi-mono-server4 /loglevels=debug /printlog=true /multiplex=false /applications=/:`pwd` /socket=$1 &
}
startFastcgi 'unix:/tmp/nginx-0.sockets'
startFastcgi 'unix:/tmp/nginx-1.sockets'
startFastcgi 'unix:/tmp/nginx-2.sockets'
startFastcgi 'unix:/tmp/nginx-3.sockets'

# make the sockets accessible to the nginx worker processes
chmod 777 /tmp/nginx-*

This starts 4 mono-fastcgi workers that can each accept 1024 connections. Then NGINX should be configured something like this:

upstream servercom {
    server unix:/tmp/nginx-0.sockets;
    server unix:/tmp/nginx-1.sockets;
    server unix:/tmp/nginx-2.sockets;
    server unix:/tmp/nginx-3.sockets;
}
server {
    listen 80;
    location / {
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_pass servercom;
        include fastcgi_params;
    }
}

This configures NGINX with a pool of 4 "upstream workers" which it will use in a round-robin fashion. Now, when I hammer my server with Boom at a concurrency of 200 for 1 minute, it's all good (aka no 502s at all).
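
To reproduce that load, a Boom invocation along these lines should do it (the flag names are assumed from Boom's usual CLI and the URL is a placeholder - check boom --help and adjust for your setup):

# 200 concurrent clients for 60 seconds (flags assumed; verify with boom --help)
boom http://localhost/ -c 200 -d 60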

I hope you can somehow apply this to your code and make stuff work :)

P.S:

You can download my Hello World ServiceStack code that I used to test here.

And you can download my full NGINX.config here.

There are some paths that need to be adjusted, though, but it should serve as a good base.
