MongoDB limit memory


Problem description

I am using Mongo for storing log files. Both MongoDB and MySQL are running on the same machine, and virtualizing the Mongo environment is not an option. I am afraid I will soon run into performance issues, as the logs table grows very fast. Is there a way to limit resident memory for Mongo so that it won't eat all available memory and excessively slow down the MySQL server?

DB machine: Debian 'lenny' 5

Other solutions (please comment):

  • As we need all historical data, we cannot use capped collections, but I am also considering a cron script that dumps and deletes old data

  • Should I also consider using smaller keys, as suggested on other forums?

Solution

Hey Vlad, you have a couple of simple strategies here regarding logs.

The first thing to know is that Mongo can generally handle lots of successive inserts without a lot of RAM. The reason is simple: you only insert or update recent data. So the index size grows, but the data itself is constantly paged out.

Put another way, you can break out the RAM usage into two major parts: index & data.

If you're running typical logging, the data portion is constantly being flushed away, so only the index really stays in RAM.
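
You can see the two parts for yourself with collection stats; here is a minimal mongo-shell sketch (the collection name `logs` is a placeholder):

```js
// Compare the two parts of RAM pressure for a log collection.
// "logs" is a placeholder name.
var s = db.logs.stats();
print("data size (paged out under append-only load): " + s.size);
print("index size (tends to stay resident in RAM):   " + s.totalIndexSize);
```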

The second thing to know is that you can mitigate the index issue by putting logs into smaller buckets. Think of it this way: if you collect each day's logs into a date-stamped collection (call it logs20101206), then you also control the size of the index in RAM.
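
A minimal mongo-shell sketch of that bucketing (the logsYYYYMMDD naming, the field names, and the ts index are assumptions for illustration):

```js
// Build the day-stamped bucket name, e.g. "logs20101206".
function bucketName(d) {
  function pad(n) { return (n < 10 ? "0" : "") + n; }
  return "logs" + d.getFullYear() + pad(d.getMonth() + 1) + pad(d.getDate());
}

var bucket = db.getCollection(bucketName(new Date()));
bucket.insert({ ts: new Date(), a: "login" }); // short field names; see below
bucket.ensureIndex({ ts: 1 });                 // index covers only one day of data
```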

As you roll over days, the old index will flush from RAM and it won't be accessed again, so it will simply go away.

but I am also considering using a cron script that dumps and deletes old data

This method of logging by days also helps delete old data. In three months when you're done with the data you simply do db.logs20101206.drop() and the collection instantly goes away. Note that you don't reclaim disk space (it's all pre-allocated), but new data will fill up the empty spot.
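
A sketch of what such a cron job could run through the mongo shell (the 90-day window, the database name, and the logsYYYYMMDD pattern are assumptions; run mongodump on each bucket first if you also want the dump step):

```js
// cleanup.js -- run nightly, e.g. via: mongo mydb cleanup.js
// Drops every per-day bucket older than ~90 days.
var cutoff = new Date(Date.now() - 90 * 24 * 60 * 60 * 1000);
db.getCollectionNames().forEach(function (name) {
  var m = name.match(/^logs(\d{4})(\d{2})(\d{2})$/);
  if (!m) return;
  var day = new Date(+m[1], +m[2] - 1, +m[3]);
  if (day < cutoff) {
    db.getCollection(name).drop(); // instant; disk space is reused, not reclaimed
  }
});
```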

Should I also consider using smaller keys, as suggested on other forums?

Yes.

In fact, I have it built into my data objects. So I access data using logs.action or logs->action, but underneath, the data is actually saved to logs.a. It's really easy to spend more space on "fields" than on "values", so it's worth shrinking the "fields" and trying to abstract it away elsewhere.
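
A minimal sketch of that kind of field-name map, with hypothetical names; the application translates readable names into one-letter keys before anything hits the database:

```js
// Hypothetical map from readable field names to stored one-letter keys.
var FIELDS = { action: "a", user: "u", message: "m" };

// Shrink a document before insert; unmapped fields pass through unchanged.
function toStored(doc) {
  var out = {};
  for (var k in doc) out[FIELDS.hasOwnProperty(k) ? FIELDS[k] : k] = doc[k];
  return out;
}

db.logs20101206.insert(toStored({ action: "login", user: "vlad" }));
// stored as: { a: "login", u: "vlad" }
```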
