MongoDB limit memory

Question

I am using mongo for storing log files. Both mongoDB and mysql are running on the same machine, virtualizing mongo env is not an option. I am afraid I will soon run into perf issues as the logs table grows very fast. Is there a way to limit resident memory for mongo so that it won't eat all available memory and excessively slow down the mysql server?

DB machine: Debian 'lenny' 5

Other solutions (please comment):

  • As we need all historical data, we can not use capped collections, but I am also considering using a cron script that dumps and deletes old data

Should I also consider using smaller keys, as suggested on other forums?

Answer

Hey Vlad, you have a couple of simple strategies here regarding logs.

The first thing to know is that Mongo can generally handle lots of successive inserts without a lot of RAM. The reason for this is simple: you only insert or update recent stuff. So the index size grows, but the data will be constantly paged out.

Put another way, you can break out the RAM usage into two major parts: index & data.

If you're running typical logging, the data portion is constantly being flushed away, so only the index really stays in RAM.

The second thing to know is that you can mitigate the index issue by putting logs into smaller buckets. Think of it this way. If you collect all of the logs into a date-stamped collection (call it logs20101206), then you can also control the size of the index in RAM.
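
As a rough illustration of the bucketing idea (the `logs20101206` name follows the answer's example; the `prefix` parameter is my own addition), the collection name can be derived from each log entry's timestamp:

```python
from datetime import datetime

def log_collection_name(ts: datetime, prefix: str = "logs") -> str:
    # Build a date-stamped collection name like "logs20101206",
    # so each day's logs land in their own collection.
    return f"{prefix}{ts.strftime('%Y%m%d')}"

print(log_collection_name(datetime(2010, 12, 6)))  # logs20101206
```

With pymongo you would then write each entry with something like `db[log_collection_name(ts)].insert_one(entry)`, and only the current day's index stays hot.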

As you roll over days, the old index will flush from RAM and it won't be accessed again, so it will simply go away.

but I am also considering using a cron script that dumps and deletes old data

This method of logging by days also helps delete old data. In three months when you're done with the data you simply do db.logs20101206.drop() and the collection instantly goes away. Note that you don't reclaim disk space (it's all pre-allocated), but new data will fill up the empty spot.
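
With per-day collections, the cron cleanup reduces to name arithmetic. A hedged sketch (the `collections_to_drop` helper and the 90-day default are hypothetical; in practice `names` would come from pymongo's `db.list_collection_names()`, and each result would be dropped with `db[name].drop()`):

```python
from datetime import datetime, timedelta

def collections_to_drop(names, now, keep_days=90, prefix="logs"):
    # Return the date-stamped log collections older than keep_days.
    cutoff = now - timedelta(days=keep_days)
    old = []
    for name in names:
        if not name.startswith(prefix):
            continue
        try:
            day = datetime.strptime(name[len(prefix):], "%Y%m%d")
        except ValueError:
            continue  # not a date-stamped log collection; leave it alone
        if day < cutoff:
            old.append(name)
    return sorted(old)
```

Dropping a whole collection is much cheaper than a remove-by-date query, since it never touches individual documents.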

Should I also consider using smaller keys, as suggested on other forums?

Yes.

In fact, I have it built into my data objects. So I access data using logs.action or logs->action, but underneath, the data is actually saved to logs.a. It's really easy to spend more space on "fields" than on "values", so it's worth shrinking the "fields" and trying to abstract it away elsewhere.
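
One way to sketch that abstraction (the mapping below is hypothetical; the answer only confirms `action` → `a`) is a pair of translation helpers applied around every write and read:

```python
# Hypothetical long-name -> short-key mapping; only "action" -> "a"
# comes from the answer, the rest are made-up examples.
FIELD_MAP = {"action": "a", "timestamp": "t", "user": "u"}
REVERSE_MAP = {v: k for k, v in FIELD_MAP.items()}

def to_stored(doc):
    # Shrink readable field names before insert, e.g. "action" -> "a".
    return {FIELD_MAP.get(k, k): v for k, v in doc.items()}

def from_stored(doc):
    # Restore readable names after a read, e.g. "a" -> "action".
    return {REVERSE_MAP.get(k, k): v for k, v in doc.items()}
```

Because MongoDB stores field names in every document, shortening them shrinks both the data files and any indexes built on those fields.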
