Database over 2GB in MongoDB


Question

We've got a file-based program we want to convert to use a document database, specifically MongoDB. Problem is, MongoDB is limited to 2GB on 32-bit machines (according to http://www.mongodb.org/display/DOCS/FAQ#FAQ-Whatarethe32bitlimitations%3F), and a lot of our users will have over 2GB of data. Is there a way to have MongoDB use more than one file somehow?

I thought perhaps I could implement sharding on a single machine, meaning I'd run more than one mongod on the same machine and they'd somehow communicate. Could that work?

Answer

The only way to have more than 2GB on a single node is to run multiple mongod processes. So sharding is one option (like you said) or doing some manual partitioning across processes.
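
As a rough illustration of the manual-partitioning option, here is a minimal Python sketch using the pymongo driver. It assumes two mongod processes are already running on localhost ports 27018 and 27019; the port numbers, the "appdata" database name, and the pick_database helper are illustrative assumptions, not part of the original answer.

    # manual_partition.py -- spread one application's data across several
    # mongod processes so that no single process's data files exceed the
    # 2GB limit of 32-bit builds. Requires the pymongo driver.
    import zlib
    from pymongo import MongoClient

    # One client per mongod process; the ports are assumptions for this sketch.
    CLIENTS = [
        MongoClient("localhost", 27018),
        MongoClient("localhost", 27019),
    ]

    def pick_database(partition_key):
        """Deterministically map a key (e.g. a user id) to one mongod instance."""
        index = zlib.crc32(partition_key.encode("utf-8")) % len(CLIENTS)
        return CLIENTS[index]["appdata"]

    # Usage: every read and write for a given key goes to the same process.
    db = pick_database("user-42")
    db.documents.insert_one({"user_id": "user-42", "title": "example"})
    print(db.documents.find_one({"user_id": "user-42"}))

For the sharding option proper, you would additionally run MongoDB's own infrastructure on the same machine (a config server, a mongos router, and each shard's mongod started with --shardsvr); that works on a single box but adds more moving parts than the manual split sketched above.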

