nginx, fastcgi and open sockets

Problem description

I'm experimenting with fastcgi on nginx, but I've run into some problems. Nginx doesn't reuse connections: it sends 0 in the BeginRequest flags, so the application should close the connection after the request has finished.
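
For context, the flag in question is the FCGI_KEEP_CONN bit in the 8-byte FCGI_BEGIN_REQUEST record body defined by the FastCGI spec. A minimal sketch of that check in D (the helper name and record parsing are illustrative, not my actual code):

// FCGI_BEGIN_REQUEST body layout: roleB1, roleB0, flags, reserved[5]
enum FCGI_KEEP_CONN = 1;

bool keepConnection(const(ubyte)[] beginRequestBody)
{
    // bit 0 of the flags byte tells the application whether to keep the connection open
    return beginRequestBody.length >= 3
        && (beginRequestBody[2] & FCGI_KEEP_CONN) != 0;
}

When that bit is 0 (as nginx sends it here), the application is expected to close the connection itself after responding.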

I'm closing the connection with:

socket.shutdown(SocketShutdown.BOTH); // stop both sending and receiving
socket.close();                       // then release the descriptor

The problem is that the connections are not actually closed. They linger on as TIME_WAIT, and nginx (or something) won't keep opening new connections. My guess is I'm doing something wrong when closing the sockets, but I don't know what. On a related note: how can I get nginx to keep connections open?

This is using nginx 1.0.6 and D 2.055

Haven't gotten any closer, but I also checked the linger option, and it's off:

linger l;
socket.getOption(SocketOptionLevel.SOCKET, SocketOption.LINGER, l); // read SO_LINGER into l
assert(l.on == 0); // linger is off

getOption returns 4, though. No idea what that means; the return value is undocumented.

I've also tried using TCP_NODELAY on the last message sent, but this didn't have any effect either:

socket.setOption(SocketOptionLevel.TCP, SocketOption.TCP_NODELAY, 1); // TCP_NODELAY is a TCP-level option, not a SOCKET-level one

nginx 1.1.4 supports keep-alive connections. This doesn't work as expected though: it correctly reports that the server is responsible for connection lifetime management, but it still creates a new socket for each request.

Answer

NGINX proxy keepalive

Regarding nginx (v1.1) keepalive for fastcgi. The proper way to configure it is as follows:

upstream fcgi_backend {
  server localhost:9000;
  keepalive 32;              # keep up to 32 idle connections to this backend per worker
}

server {
  ...
  location ~ \.php$ {
    fastcgi_keep_conn on;    # reuse the fastcgi connection instead of closing it after each request
    fastcgi_pass fcgi_backend;
    ...
  }
}

TIME_WAIT

The TCP TIME_WAIT state has nothing to do with lingers, tcp_no_delays, timeouts and so on. It is managed entirely by the OS kernel and can only be influenced by system-wide configuration options. Generally it is unavoidable; it is simply how the TCP protocol works. The most radical way to avoid TIME_WAIT is to reset (send an RST packet) the TCP connection on close by setting linger=ON and linger_timeout=0. But doing it this way is not recommended for normal operation, as you might lose unsent data. Only reset a socket under error conditions (timeouts, etc.).
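
For completeness, here is roughly what that reset-on-close variant looks like with D's std.socket, reusing the linger/.on style from the question's snippet above; treat it as a sketch for error paths only:

linger l;
l.on   = 1;  // enable SO_LINGER; field names follow std.socket's linger wrapper, adjust if your version differs
l.time = 0;  // zero timeout: close() sends RST instead of FIN
socket.setOption(SocketOptionLevel.SOCKET, SocketOption.LINGER, l);
socket.close();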

What I would try is the following. After you send all your data call socket.shutdown(WRITE) (this will send FIN packet to the other party) and do not close the socket yet. Then keep on reading from the socket until you receive indication that the connection is closed by the other end (in C that is typically indicated by 0-length read()). After receiving this indication, close the socket. Read more about it here.
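
In std.socket terms, this half-close-then-drain sequence might look like the following sketch (buffer size and error handling are arbitrary; socket is assumed to be the connected Socket):

socket.shutdown(SocketShutdown.SEND);   // half-close: sends FIN, read side stays open

ubyte[1024] buf;
for (;;)
{
    auto n = socket.receive(buf[]);
    if (n == 0 || n == Socket.ERROR)    // 0 = peer closed its side; ERROR = give up
        break;
    // otherwise discard whatever the peer still sends
}
socket.close();                         // now safe to close without lingering data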

If you are developing any sort of network server, you must study options such as TCP_NODELAY, as they really do affect performance. Without them you might experience ~20ms delays (Nagle delay) on every packet sent. Although these delays look small, they can adversely affect your requests-per-second statistics. A good read about it is here.
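
As an illustration, with std.socket this is usually a one-time setting on the accepted connection rather than something toggled per message (note the TCP option level; listener here is just an assumed listening Socket):

auto conn = listener.accept();                                       // "listener" is an assumed listening Socket
conn.setOption(SocketOptionLevel.TCP, SocketOption.TCP_NODELAY, 1);  // disable Nagle for this connection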

Another good reference to read about sockets is here.

I would agree with other commenters that using the FastCGI protocol to your backend might not be a very good idea. If you care about performance, then you should implement your own nginx module (or, if that seems too difficult, a module for some other server such as NXWEB). Otherwise use HTTP. It is easier to implement and is far more versatile than FastCGI. And I would not say HTTP is much slower than FastCGI.
