ArangoDB Too many open files

Problem description

For a few days now we have been encountering a problem with our ArangoDB installation. A few minutes to an hour after startup, all connections to the database are refused. The arango log file says "Too many open files". A "lsof | grep arango | wc -l" shows that the database has around 50,000 open file handles, which is far below the maximum allowed by the Linux system (around 3m). Does anyone have an idea where this error comes from?
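
One way to double-check that count per process is to read it straight from /proc instead of filtering lsof output. A minimal sketch, assuming a single arangod instance is running (otherwise look up the PID by hand):

# count descriptors currently held by the arangod process itself
ls /proc/$(pidof arangod)/fd | wc -l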

We are using Ubuntu Linux with a 3.13 kernel, 30 GB RAM and three cores. The database is still very small, with around 1.5m entries and a size of 50 GB.

Thanks, Secana

"netstat -anpt | fgrep 2480" shows:

root@syssec-graphdb-001-test:~# netstat -anpt | fgrep 2480
tcp        0      0 10.215.17.193:2480      0.0.0.0:*               LISTEN               7741/arangod
tcp        0      0 10.215.17.193:2480      10.215.50.30:53453      ESTABLISHED          7741/arangod
tcp        0      0 10.215.17.193:2480      10.215.50.31:49299      ESTABLISHED          7741/arangod
tcp        0      0 10.215.17.193:2480      10.215.50.30:53155      ESTABLISHED          7741/arangod

"ulimit -n" 的结果是 1024,所以我认为 ~50,000 都是 arango 进程.

"ulimit -n" has a result of 1024, so I think that the ~50,000 are all arango processes together.

Last lines in the log file before the database died:

2015-05-26T12:20:43Z [9672] ERROR cannot open datafile '/data/arangodb/databases/database-235999516/collection-28464454696/datafile-18806474509149.db': 'Too many open files'
2015-05-26T12:20:43Z [9672] ERROR cannot open datafile '/data/arangodb/databases/database-235999516/collection-28464454696/datafile-18806474509149.db': Too many open files
2015-05-26T12:20:43Z [9672] DEBUG [arangod/VocBase/collection.cpp:1632] cannot open '/data/arangodb/databases/database-235999516/collection-28464454696', check failed
2015-05-26T12:20:43Z [9672] ERROR cannot open document collection from path '/data/arangodb/databases/database-235999516/collection-28464454696'

Recommended answer

It looks like it will make sense to increase the max. number of open files a process is allowed to manage. Given the stated database size of around 50 GB, the (presumably default) value of 1024 seems to be too low.
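
A minimal sketch of raising that per-process limit on Ubuntu, assuming arangod runs under a dedicated user (the user name "arangodb" and the value 131072 are just placeholders; pick what fits your setup, and note that limits.conf only takes effect for sessions that go through PAM):

# /etc/security/limits.conf -- raise the open-file limit for the database user
arangodb  soft  nofile  131072
arangodb  hard  nofile  131072

# or raise it directly in the shell / init script that launches arangod
ulimit -n 131072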

arangod will require one file descriptor for each parallel client connection. That may not be many, but in the face of HTTP keep-alive connections this could already account for several file descriptors.

Additionally, each datafile of an active collection will need to be memory-mapped and cost one file descriptor as well. With the default datafile size of 32 MB, a database size of 50 GB (on disk) will already consume 1,600 file descriptors:

50 GB database size / (32 MB default size / 1 datafile) = 1600 datafiles
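
That estimate can be sanity-checked against the number of datafiles actually present on disk; a sketch using the data directory from the log excerpt above (journal and compactor files add a few more descriptors on top):

find /data/arangodb/databases -name "datafile-*.db" | wc -l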

Increasing the ulimit -n value for the arangod user and environment therefore will make sense. You can confirm that arangod can actually use the configured number of file descriptors by starting it with option --server.descriptors-minimum <value>, e.g.

--server.descriptors-minimum 32768 

for that many file descriptors. If arangod cannot effectively use that specified amount of file descriptors, it will fail at start with a fatal error. Of course that option can also be put into the arangod.conf file.
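
In the config file that would look roughly like this (assuming the usual mapping of --section.option flags onto [section] blocks; double-check against the arangod.conf shipped with your installation):

[server]
descriptors-minimum = 32768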

Additionally, the default size for (new) datafiles can be increased via the journalSize parameter for collections. That won't help right now, but will lower the number of required file descriptors for data saved in the future.
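
For illustration, here is how journalSize could be adjusted from the arangosh console (the collection names "mycollection" and "bigdata" are placeholders; the value is in bytes, and only datafiles created afterwards are affected):

// raise the journal size of an existing collection to 128 MB
db.mycollection.properties({ journalSize: 128 * 1024 * 1024 });

// or create a new collection with a larger journal size right away
db._create("bigdata", { journalSize: 128 * 1024 * 1024 });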
