Nginx is giving uWSGI very old requests?



I'm seeing a weird situation where either Nginx or uwsgi seems to be building up a long queue of incoming requests, and attempting to process them long after the client connection timed out. I'd like to understand and stop that behavior. Here's more info:

My Setup

My server uses Nginx to pass HTTPS POST requests to uWSGI and Flask via a Unix file socket. I have basically the default configurations on everything.
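For reference, a mostly-default setup like the one described typically looks roughly like the sketch below. The socket path and host come from the log excerpts later in this post; the module name, worker count, and certificate paths are illustrative guesses rather than the actual files from this setup.

# nginx server block (sketch)
server {
    listen 5000 ssl;
    server_name example.com;

    # hypothetical certificate paths
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/srv/my_server/myproject.sock;
    }
}

# uwsgi.ini (sketch; "myproject:app" is a hypothetical Flask entry point)
[uwsgi]
module = myproject:app
master = true
# four workers, matching the "25% CPU per worker" observation below
processes = 4
socket = /srv/my_server/myproject.sock
chmod-socket = 664
vacuum = true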

I have a Python client sending 3 requests per second to that server.
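The client behavior described here amounts to something like the sketch below (the endpoint and host come from the log excerpts; the payload is made up; the 7-second timeout is the one mentioned under "The Problem"):

import time
import requests

URL = "https://example.com:5000/api/polldata"   # endpoint taken from the logs

while True:
    try:
        # POST with the 7-second client-side timeout described below
        resp = requests.post(URL, json={"data": "..."}, timeout=7)
        resp.raise_for_status()
    except requests.RequestException as exc:
        print("request failed:", exc)
    time.sleep(1 / 3)   # roughly 3 requests per second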

The Problem

After running the client for about 4 hours, the client machine started reporting that all the connections were timing out. (It uses the Python requests library with a 7-second timeout.) About 10 minutes later, the behavior changed: the connections began failing with 502 Bad Gateway.

I powered off the client. But for about 10 minutes AFTER powering off the client, the server-side uWSGI logs showed uWSGI attempting to answer requests from that client! And top showed uWSGI using 100% CPU (25% per worker).

During those 10 minutes, each uwsgi.log entry looked like this:

Thu May 25 07:36:37 2017 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /api/polldata (ip 98.210.18.212) !!!
Thu May 25 07:36:37 2017 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 296] during POST /api/polldata (98.210.18.212)
IOError: write error
[pid: 34|app: 0|req: 645/12472] 98.210.18.212 () {42 vars in 588 bytes} [Thu May 25 07:36:08 2017] POST /api/polldata => generated 0 bytes in 28345 msecs (HTTP/1.1 200) 2 headers in 0 bytes (0 switches on core 0)

And the Nginx error.log shows a lot of this:

2017/05/25 08:10:29 [error] 36#36: *35037 connect() to unix:/srv/my_server/myproject.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 98.210.18.212, server: example.com, request: "POST /api/polldata HTTP/1.1", upstream: "uwsgi://unix:/srv/my_server/myproject.sock:", host: "example.com:5000"

After about 10 minutes the uWSGI activity stops. When I turn the client back on, Nginx happily accepts the POST requests, but uWSGI gives the same "writing to a closed pipe" error on every request, as if it's permanently broken somehow. Restarting the webserver's docker container does not fix the problem, but rebooting the host machine fixes it.

Theories

In the default Nginx -> socket -> uWSGI configuration, is there a long queue of requests with no timeout? I looked in the uWSGI docs and I saw a bunch of configurable timeouts, but all default to around 60 seconds, so I can't understand how I'm seeing 10-minute-old requests being handled. I haven't changed any default timeout settings.

The application uses almost all the 1GB RAM in my small dev server, so I think resource limits may be triggering the behavior.

Either way, I'd like to change my configuration so that requests > 30 seconds old get dropped with a 500 error, rather than getting processed by uWSGI. I'd appreciate any advice on how to do that, and theories on what's happening.
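For what it's worth, a commonly suggested way to bound request age is a uWSGI harakiri limit combined with matching nginx-side uWSGI timeouts and a smaller listen backlog. The sketch below is illustrative only and has not been verified against this exact setup; note that a request killed this way typically surfaces to the client as a 5xx from nginx rather than exactly a 500.

# uwsgi.ini additions (sketch)
[uwsgi]
# abort any worker that spends more than 30 seconds on a single request
harakiri = 30
harakiri-verbose = true
# shrink the listen backlog (default 100) so stale requests cannot pile up
listen = 16

# matching nginx additions inside the uwsgi_pass location (sketch)
uwsgi_connect_timeout 10s;
uwsgi_send_timeout 30s;
uwsgi_read_timeout 30s;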

Solution

This appears to be an issue downstream on the uWSGI side.

It sounds like your backend code may be faulty in that it takes too long to process the requests, does not implement any sort of rate limiting for the requests, and does not properly detect when any of the underlying connections have been terminated (hence the errors about your code trying to write to closed pipes, and possibly even starting to process new requests long after the underlying connections have been terminated).

• Per http://lists.unbit.it/pipermail/uwsgi/2013-February/005362.html, you may want to abort processing within your backend if not uwsgi.is_connected(uwsgi.connection_fd()); a sketch of this check follows this list.

• You may want to explore https://uwsgi-docs.readthedocs.io/en/latest/Options.html#harakiri.

• As a last resort, as per "Re: Understanding proxy_ignore_client_abort functionality" (2014), you may want to change uwsgi_ignore_client_abort from off to on in order not to drop ongoing uWSGI connections that have already been passed to the upstream (even if the client subsequently disconnects), so that you stop receiving the closed-pipe errors from uWSGI, and so that any concurrent-connection limiting is done within nginx itself (otherwise the connections to uWSGI get dropped by nginx when the client disconnects, and nginx has no idea how many requests are queued up within uWSGI for subsequent processing).
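A minimal sketch of the first suggestion, assuming a Flask view running under uWSGI (the uwsgi module is only importable inside uWSGI workers, and the route body and expensive_processing() helper are hypothetical placeholders):

from flask import Flask, jsonify, request

try:
    import uwsgi              # provided by uWSGI at runtime; not installable from PyPI
except ImportError:
    uwsgi = None              # e.g. when running under the Flask development server

app = Flask(__name__)

def client_still_connected():
    # uwsgi.connection_fd() returns the fd of the current client connection and
    # uwsgi.is_connected() reports whether that fd is still open.
    if uwsgi is None:
        return True
    return uwsgi.is_connected(uwsgi.connection_fd())

def expensive_processing(payload):
    # Hypothetical placeholder for whatever slow work /api/polldata really does.
    return {"ok": True}

@app.route("/api/polldata", methods=["POST"])
def polldata():
    if not client_still_connected():
        # The client already gave up (e.g. its 7-second timeout fired), so skip
        # the work instead of writing to a closed pipe later.
        return "", 500
    result = expensive_processing(request.get_json(silent=True))
    if not client_still_connected():
        return "", 500
    return jsonify(result)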
