Too many open files while running ensureIndex in MongoDB


Problem description

I would like to create a text index on a mongo collection. I write:

db.test1.ensureIndex({'text':'text'})

and then I saw this in the mongod process:

Sun Jan  5 10:08:47.289 [conn1] build index library.test1 { _fts: "text", _ftsx: 1 }
Sun Jan  5 10:09:00.220 [conn1]         Index: (1/3) External Sort Progress: 200/980    20%
Sun Jan  5 10:09:13.603 [conn1]         Index: (1/3) External Sort Progress: 400/980    40%
Sun Jan  5 10:09:26.745 [conn1]         Index: (1/3) External Sort Progress: 600/980    61%
Sun Jan  5 10:09:37.809 [conn1]         Index: (1/3) External Sort Progress: 800/980    81%
Sun Jan  5 10:09:49.344 [conn1]      external sort used : 5547 files  in 62 secs
Sun Jan  5 10:09:49.346 [conn1] Assertion: 16392:FileIterator can't open file: data/_tmp/esort.1388912927.0//file.233errno:24 Too many open files

I work on Mac OS X 10.9.1. Please help.
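(A side note, not part of the original question: in MongoDB 3.0 and later the `ensureIndex` shell helper is deprecated in favor of `createIndex`, so on a modern shell the equivalent command would be:

    db.test1.createIndex({ 'text': 'text' })

The behavior, and the error below, are the same either way.)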

Recommended answer

NB: This solution may not work with recent versions of macOS (comments indicate >10.13?). Apparently, changes have been made for security purposes.

Conceptually, the solution still applies - the following are a few sources of discussion:

--

I've had the same problem (executing a different operation, but still getting a "Too many open files" error), and as lese says, it seems to come down to the 'maxfiles' limit on the machine running mongod.
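To illustrate what errno 24 means here: every process has a per-process cap on open file descriptors (RLIMIT_NOFILE, governed by 'maxfiles' on a Mac), and the OS raises EMFILE, "Too many open files", once that cap is hit. A minimal Python sketch - an illustration of the mechanism, not part of the original answer - that reproduces the error by shrinking its own soft limit:

```python
import errno
import resource
import tempfile

# Save the current per-process limit on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower the soft limit so the error is easy to trigger.
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

files = []
err = None
try:
    while True:
        # Each temporary file consumes one file descriptor.
        files.append(tempfile.TemporaryFile())
except OSError as e:
    err = e.errno  # EMFILE (24): "Too many open files"
finally:
    for f in files:
        f.close()
    # Restore the original limits.
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(err == errno.EMFILE)  # prints True
```

mongod's external sort hit exactly this ceiling: it opened 5547 temporary sort files, far more than the default soft limit allows.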

On a Mac, it is better to check the limits with:

sudo launchctl limit

which gives you:

<limit name> <soft limit> <hard limit>
    cpu         unlimited      unlimited      
    filesize    unlimited      unlimited      
    data        unlimited      unlimited      
    stack       8388608        67104768       
    core        0              unlimited      
    rss         unlimited      unlimited      
    memlock     unlimited      unlimited      
    maxproc     709            1064           
    maxfiles    1024           2048  

What I did to get around the problem was to temporarily set the limit higher (mine was originally something like soft: 256, hard: 1000, or something weird like that):

sudo launchctl limit maxfiles 1024 2048

Then re-run the query/indexing operation and see if it still breaks. If not, and to keep the higher limits (they will reset when you log out of the shell session in which you set them), create an '/etc/launchd.conf' file with the following line:

limit maxfiles 1024 2048

(or add that line to your existing launchd.conf file, if you already have one).

This will set the maxfiles limit via launchctl for every shell at login.
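For recent macOS versions where /etc/launchd.conf is no longer read (per the NB above), a commonly suggested alternative - an assumption on my part, not something from the original answer - is a launchd daemon plist, e.g. '/Library/LaunchDaemons/limit.maxfiles.plist', that runs the same `launchctl limit` command at boot:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
      <dict>
        <key>Label</key><string>limit.maxfiles</string>
        <key>ProgramArguments</key>
        <array>
          <string>launchctl</string>
          <string>limit</string>
          <string>maxfiles</string>
          <string>1024</string>
          <string>2048</string>
        </array>
        <key>RunAtLoad</key><true/>
      </dict>
    </plist>

After creating it, load it with `sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist` and verify with `launchctl limit maxfiles`.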

