Thousands of concurrent HTTP requests in Node


Problem description

I have a list of thousands of URLs. I want to run a health check (healt.php) on each of them with an HTTP request.

Here is my problem:

I've written an application in Node. It makes the requests in a pooled way, using a variable to control how many concurrent connections are open at once: 300. Taken one by one, each request is fast, no more than 500 ms.
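The "pooled way" described above can be sketched as a small concurrency limiter. This is a hypothetical helper, not the OP's actual agent.js: it starts at most `limit` tasks at a time and launches the next one as each finishes.

```javascript
// Minimal sketch of a concurrency pool (hypothetical, for illustration).
// tasks: array of functions returning promises; limit: max in flight at once.
function runPool(tasks, limit) {
  return new Promise((resolve) => {
    const results = [];
    let next = 0;    // index of the next task to start
    let active = 0;  // number of tasks currently in flight

    function launch() {
      while (active < limit && next < tasks.length) {
        const i = next++;
        active++;
        Promise.resolve(tasks[i]()).then((r) => {
          results[i] = r;
          active--;
          if (next === tasks.length && active === 0) resolve(results);
          else launch();
        });
      }
    }

    if (tasks.length === 0) resolve(results);
    else launch();
  });
}
```

With a list of URL-checking tasks, `runPool(tasks, 300)` would match the setup the question describes.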

But when I run the application, the result is:

$ node agent.js

200ms   url1.tld
250ms   url4.tld
400ms   url2.tld
530ms   url8.tld
800ms   url3.tld
...
2300ms  urlN.tld
...
30120ms urlM.tld

It seems that there is a limit on concurrency. When I execute

$ ps axo nlwp,cmd | grep node

the result is:

6 node agent.js

there are 6 threads managing all the concurrent connections. I found an env variable to control concurrency in Node: UV_THREADPOOL_SIZE

$ UV_THREADPOOL_SIZE=300 node agent.js

200ms   url1.tld
210ms   url4.tld
220ms   url2.tld
240ms   url8.tld
400ms   url3.tld
...
800ms  urlN.tld
...
1010ms urlM.tld

The problem is still there, but the results are much better. With the ps command:

$ ps axo nlwp,cmd | grep node

132 node agent.js

Next step: looking in the Node source code, I found a constant in deps/uv/src/unix/threadpool.c:

#define MAX_THREADPOOL_SIZE 128

OK. I changed that value to 2048, compiled and installed Node, and ran the command

$ UV_THREADPOOL_SIZE=300 node agent.js

All seems OK. Response times no longer increase gradually. But when I try a bigger concurrency number, the problem appears again. This time it's not related to the number of threads, because with the ps command I can see there are enough of them.

I tried to write the same application in golang, but the results were the same: the times increase gradually.

So, my question is: where is the concurrency limit? Memory, CPU load, and bandwidth are not out of bounds, and I have tuned sysctl.conf and limits.conf to avoid some limits (files, ports, memory, ...).

Recommended answer

You may be throttled by http.globalAgent's maxSockets. Depending on whether you're using http or https, see if this fixes your problem:

require('http').globalAgent.maxSockets = Infinity;
require('https').globalAgent.maxSockets = Infinity;

