MongoDB: out of memory
Problem description
I am wondering about the MongoDB memory consumption. I have read the corresponding manual sections and the other questions on the topic, but I think this situation is different. May I ask you for your advice?
This is the error from the DB log file:
Fri Oct 26 20:34:00 [conn1] ERROR: mmap private failed with out of memory. (64 bit build)
Fri Oct 26 20:34:00 [conn1] Assertion: 13636:file /docdata/mongodb/data/xxx_letters.5 open/create failed in createPrivateMap (look in log for more information)
These are the data files:
total 4.0G
drwxr-xr-x 2 mongodb mongodb 4.0K 2012-10-26 20:21 journal
-rw------- 1 mongodb mongodb 64M 2012-10-25 19:34 xxx_letters.0
-rw------- 1 mongodb mongodb 128M 2012-10-20 22:10 xxx_letters.1
-rw------- 1 mongodb mongodb 256M 2012-10-24 09:10 xxx_letters.2
-rw------- 1 mongodb mongodb 512M 2012-10-26 10:04 xxx_letters.3
-rw------- 1 mongodb mongodb 1.0G 2012-10-26 19:56 xxx_letters.4
-rw------- 1 mongodb mongodb 2.0G 2012-10-03 11:32 xxx_letters.5
-rw------- 1 mongodb mongodb 16M 2012-10-26 19:56 xxx_letters.ns
This is the output of free -tm:
total used free shared buffers cached
Mem: 3836 3804 31 0 65 2722
-/+ buffers/cache: 1016 2819
Swap: 4094 513 3581
Total: 7930 4317 3612
Is it really necessary to have enough system memory so that the largest data files fit in? Why grow the files that much? (From the sequence shown above, I expect the next file to be 4GB.) I'll try to extend the RAM, but data will eventually grow even more. Or maybe this is not a memory problem at all?
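For context on the growth pattern: MongoDB's MMAPv1 storage engine preallocates each new data file at double the size of the previous one, capped at 2 GB per file, so the sequence does not keep doubling forever. A minimal sketch of that allocation pattern (the function name and parameters are illustrative, not MongoDB API):

```python
# Sketch of MMAPv1-style data file preallocation:
# each new file doubles the previous one's size, capped at 2 GB.
def data_file_sizes(n_files, start_mb=64, cap_mb=2048):
    sizes = []
    size = start_mb
    for _ in range(n_files):
        sizes.append(size)
        size = min(size * 2, cap_mb)
    return sizes

# Matches the listing above: 64M, 128M, 256M, 512M, 1G, 2G, then 2G again.
print(data_file_sizes(7))  # [64, 128, 256, 512, 1024, 2048, 2048]
```

This is why the file after xxx_letters.5 turns out to be 2 GB again rather than 4 GB.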
I have got a 64 bit Linux system and use the 64 bit MongoDB 2.0.7-rc1. There is plenty of disk space, the CPU load is 0.0. This is uname -a:
Linux xxx 2.6.32.54-0.3-default #1 SMP 2012-01-27 17:38:56 +0100 x86_64 x86_64 x86_64 GNU/Linux
Answer
ulimit -a solved the mystery:
core file size (blocks, -c) 1
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 30619
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 3338968
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 30619
virtual memory (kbytes, -v) 6496960
file locks (-x) unlimited
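The two telling entries are "max memory size" and "virtual memory". With journalling enabled, MMAPv1 maps each data file twice (a shared view plus a private copy-on-write view, which is what createPrivateMap in the error refers to), so the required address space is roughly double the on-disk data size. Rough arithmetic, assuming that double mapping:

```python
# Rough arithmetic: address space needed for the files listed above
# (assumption: each file is mapped twice with journalling enabled).
data_files_gb = 0.064 + 0.128 + 0.256 + 0.512 + 1.0 + 2.0  # xxx_letters.0-.5
ns_gb = 0.016                                              # xxx_letters.ns
mapped_gb = 2 * (data_files_gb + ns_gb)    # shared view + private view
vm_limit_gb = 6496960 / 1024 / 1024        # ulimit -v, kbytes -> GB
print(f"needed ~{mapped_gb:.1f} GB of address space, limit {vm_limit_gb:.1f} GB")
# -> needed ~8.0 GB of address space, limit 6.2 GB
```

The 6.2 GB virtual memory limit is already too small for ~8 GB of mappings, which is exactly what the mmap failure reports.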
It worked after setting max memory size and virtual memory to unlimited and restarting everything. By the way, the next file was again 2 GB.
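To make the raised limits survive the next reboot, they can be set where mongod's environment is configured. One possible sketch, assuming mongod runs as the mongodb user on a system where PAM reads /etc/security/limits.conf (the exact mechanism depends on how mongod is started):

# /etc/security/limits.conf -- raise address-space and resident-size
# limits for the mongodb user ("as" = virtual memory, "rss" = max memory size)
mongodb    soft    as     unlimited
mongodb    hard    as     unlimited
mongodb    soft    rss    unlimited
mongodb    hard    rss    unlimited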
Sorry for bothering you, but I was desperate. Maybe this helps somebody "googling" with a similar problem.