Mongodb terminates when it runs out of memory


Problem Description

I have the following configuration:


  • A host running three Docker containers:

    • MongoDB

    • Redis

    • A program that uses the two previous containers to store data

Both Redis and Mongodb are used to store huge amounts of data. I know Redis needs to keep all its data in RAM and I am fine with this. Unfortunately, what happens is that Mongo starts taking up a lot of RAM and, as soon as the host RAM is full (we're talking about 32GB here), either Mongo or Redis crashes.

I have read the following previous questions about this:


1. Limit MongoDB RAM Usage: apparently most RAM is used up by the WiredTiger cache
2. MongoDB limit memory: here apparently the problem was log data
3. Limit the RAM memory usage in MongoDB: here they suggest limiting mongo's memory so that it uses a smaller amount of memory for its cache/logs/data
4. MongoDB using too much memory: here they say it's the WiredTiger caching system, which tends to use as much RAM as possible to provide faster access. They also state it's completely okay to limit the WiredTiger cache size, since it handles I/O operations pretty efficiently (a quick way to inspect the actual cache usage is sketched right after this list)
5. Is there any option to limit mongodb memory usage?: caching again; they also add that MongoDB uses the LRU (Least Recently Used) cache algorithm to determine which "pages" to release, and you will find some more information in these two questions
6. MongoDB index/RAM relationship: quote: MongoDB keeps what it can of the indexes in RAM. They'll be swapped out on an LRU basis. You'll often see documentation that suggests you should keep your "working set" in memory: if the portions of index you're actually accessing fit in memory, you'll be fine.
7. how to release the caching which is used by Mongodb?: same answer as in 5.
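
For reference, one way to see how the WiredTiger cache is configured and how much of it is in use is to query serverStatus from the mongo shell. A minimal sketch, assuming the container is named mongo (a placeholder) and that the classic mongo shell is available (on newer versions the binary is mongosh):

    # Inspect WiredTiger cache limits and current usage inside the running container.
    # "maximum bytes configured" is the cache cap; "bytes currently in the cache" is live usage.
    docker exec -it mongo mongo --quiet --eval '
      var c = db.serverStatus().wiredTiger.cache;
      print("cache max (bytes):    " + c["maximum bytes configured"]);
      print("cache in use (bytes): " + c["bytes currently in the cache"]);
    '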

Now what I appear to understand from all these answers is that:


1. Mongo would ideally keep all of its indexes in RAM for faster access. In my case, however, I am fine with the indexes partially residing on disk, since I have a quite fast SSD (see the sketch after this list for a quick way to check how large the indexes actually are).

2. RAM is mostly used by Mongo for caching.
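
To put item 1 in numbers, the total index size can be compared against the available RAM directly from the shell. A minimal sketch, assuming the database is called mydb and the container is named mongo (both placeholders):

    # Print the total size of all indexes in the database, in megabytes.
    docker exec -it mongo mongo mydb --quiet --eval '
      print("total index size (MB): " + (db.stats().indexSize / 1024 / 1024).toFixed(1));
    '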

Considering this, I was expecting Mongo to try to use as much RAM as possible, while also being able to function with little RAM and to fetch most things from disk. However, I limited the Mongo Docker container's memory (to 8GB, for instance) using --memory and --memory-swap, but instead of fetching things from disk, Mongo simply crashed as soon as it ran out of memory.
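
For context, this is roughly how the container was being constrained; a hedged sketch using the official mongo image, with the container name and volume path as placeholders:

    # Limit the MongoDB container to 8GB of RAM, with no additional swap
    # (setting --memory-swap equal to --memory disables extra swap usage).
    docker run -d --name mongo \
        --memory=8g --memory-swap=8g \
        -v /data/mongo:/data/db \
        mongo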

How can I force Mongo to use only the available memory and to fetch from disk everything that does not fit into memory?

Solution

Thanks to @AlexBlex's comment I solved my issue. Apparently the problem was that Docker limited the container's RAM to 8GB, but the WiredTiger storage engine was still trying to use its default cache size of 50% of the total system RAM minus 1GB (which in my case would have been roughly 15GB).

Capping WiredTiger's cache size, by setting the relevant configuration option to a value lower than what Docker was allocating, solved the problem.
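
The answer does not quote the exact option, but the documented way to cap the WiredTiger cache is storage.wiredTiger.engineConfig.cacheSizeGB (or the equivalent --wiredTigerCacheSizeGB command-line flag). A minimal sketch of the fixed setup, with the cache capped well below the 8GB container limit (container name, volume path, and the 4GB figure are placeholders):

    # Same memory limits as before, but WiredTiger's cache is now capped at 4GB
    # (equivalent to storage.wiredTiger.engineConfig.cacheSizeGB: 4 in mongod.conf).
    docker run -d --name mongo \
        --memory=8g --memory-swap=8g \
        -v /data/mongo:/data/db \
        mongo mongod --wiredTigerCacheSizeGB 4

Keeping the cache cap noticeably below the container limit leaves headroom for connections, in-memory aggregation, and the filesystem cache inside the container.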

