connect EADDRNOTAVAIL in nodejs under high load - how to free or reuse TCP ports faster?


Question

I have a small wiki-like web application based on the express framework which uses Elasticsearch as its back-end. For each request it basically only goes to the Elasticsearch DB, retrieves the object and returns it rendered by the Handlebars template engine. The communication with Elasticsearch is over HTTP.

This works great as long as I have only one node-js instance running. After I updated my code to use the cluster module (as described in the nodejs documentation), I started to encounter the following error: connect EADDRNOTAVAIL

This error shows up when I have 3 or more Python scripts running which constantly retrieve URLs from my server. With 3 scripts I can retrieve ~45,000 pages; with 4 or more scripts running it is between 30,000 and 37,000 pages. Running only 2 or 1 scripts, I stopped them after half an hour, by which point they had retrieved 310,000 and 160,000 pages respectively.

I've found this similar question and tried changing http.globalAgent.maxSockets, but that didn't have any effect.
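
For reference, the change I tried was along these lines (the value 100 is just an illustrative example, not the exact limit from my tests):

http = require('http')

# Raise the per-host cap on concurrent sockets kept by the default (global) Agent.
# 100 is an arbitrary example value; in my tests changing this made no difference.
http.globalAgent.maxSockets = 100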

This is the part of the code which listens for the URLs and retrieves the data from Elasticsearch.

app.get('/wiki/:contentId', (req, res) ->
    http.get(elasticSearchUrl(req.params.contentId), (innerRes) ->
        if (innerRes.statusCode != 200)
            res.send(innerRes.statusCode)
            innerRes.resume()
        else
            body = ''
            innerRes.on('data', (bodyChunk) ->
                body += bodyChunk
            )
            innerRes.on('end', () ->
                res.render('page', {'title': req.params.contentId, 'content': JSON.parse(body)._source.html})
            )
    ).on('error', (e) ->
        console.log('Got error: ' + e.message)  # the error is reported here
    )
)

Update:

After looking more into it, I now understand the root of the problem. I ran the command netstat -an | grep -e tcp -e udp | wc -l several times during my test runs to see how many ports were in use, as described in the post Linux: EADDRNOTAVAIL (Address not available) error. I could observe that at the time I received the EADDRNOTAVAIL error, 56,677 ports were in use (instead of the usual ~180).

Also, when using only 2 simultaneous scripts, the number of used ports saturates at around 40,000 (+/- 2,000), meaning ~20,000 ports are used per script (that is the point at which node-js cleans up old ports before creating new ones), while with 3 scripts running it breaches the 56,677 ports (~60,000). This explains why it fails with 3 scripts requesting data, but not with 2.

So now my question becomes: how can I force node-js to free up ports more quickly, or to reuse the same ports all the time (which would be the preferable solution)?

Thanks

Answer

For now, my solution is to set the agent of my request options to false, which, according to the documentation,

opts out of connection pooling with an Agent, defaults request to Connection: close.

As a result, my number of used ports doesn't exceed 26,000. This is still not a great solution, especially since I don't understand why reusing ports doesn't work, but it solves the problem for now.
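
A minimal sketch of what that change looks like, wrapped in a small helper (fetchFromEs and esUrl are illustrative names, not from the original code; the response handling stays exactly as in the question):

http = require('http')
url  = require('url')

# Fetch a document from Elasticsearch without Agent connection pooling:
# `agent: false` gives every request its own one-off socket and sends
# `Connection: close` instead of keeping the connection alive for reuse.
fetchFromEs = (esUrl, callback) ->
    options = url.parse(esUrl)
    options.agent = false
    request = http.get(options, callback)
    request.on('error', (e) -> console.log('Got error: ' + e.message))

Each request then opens its own socket and closes it when the response is done; the closed sockets still take a little while to be released by the OS, which is presumably why a number of ports remain in use.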
