mongodb bad performance


Problem description

I am currently using MongoDB, and I am seeing very bad query performance (queries can take seconds). The scenario is as follows:

I have documents with this structure:

{_id:"xxx", userId:"yyy", a:1 ,b:2,  counter:1}    

In the test:

"userId" value could be {1..200,000}
"a" values could be {1..30}
"b" values could be {1}

Thus my collection will have a maximum size of 6,000,000 documents. Currently there are two indexes defined for this collection: the default _id and userId.

The business logic queries for all of a user's entries and then updates one specific entry by incrementing its counter (the update query is issued by "_id"). If the entity is new, an insert query is issued instead.
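For concreteness, the read-then-increment step can be sketched as follows. This is a simulation against an in-memory dict standing in for the collection (the helper name `inc_counter` is mine, not from the question); the equivalent server-side pymongo call of that era is shown in the comment:

```python
# Hypothetical sketch of the per-user counter update the question describes,
# simulated against a plain dict instead of a real MongoDB collection.
# With pymongo the same thing would be a single upsert on the server:
#   coll.update({"_id": doc_id}, {"$inc": {"counter": 1}}, upsert=True)

def inc_counter(coll, doc_id, user_id, a, b):
    """Increment `counter` on the document with the given _id,
    inserting it with counter=1 if it does not exist yet."""
    doc = coll.get(doc_id)
    if doc is None:
        coll[doc_id] = {"_id": doc_id, "userId": user_id,
                        "a": a, "b": b, "counter": 1}
    else:
        doc["counter"] += 1

coll = {}
inc_counter(coll, "x1", "u123", 1, 1)   # insert: counter becomes 1
inc_counter(coll, "x1", "u123", 1, 1)   # update: counter becomes 2
print(coll["x1"]["counter"])  # 2
```

Doing the increment as a single `$inc` upsert (rather than a client-side read-modify-write) keeps the operation atomic on the server and avoids one round trip per update.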

I'm running Mongo 1.8.2 on Ubuntu with 8 GB of RAM.

I have a master with secondary replicas (all the Mongo instances run with local disk storage, in one subnetwork together with the Tomcat server). All reads go to the secondaries and all writes to the master, of course. I haven't tested sharding, since I assumed that 6,000,000 documents is not a huge collection, is it?

In addition, I run a JMeter test that generates 500 threaded requests at a time, each with a different userId.

When I run mongostat I see that the locked % is very high (about 70%). After about 5-10 minutes of load, I see that qw (the write queue) reaches 500 (the number of my open connections). When I stop the server, it takes Mongo about 10-20 minutes to work through all the queued operations.

I have also run db.serverStatus() and explain, and the results look fine. When I run db.currentOp() I see the queries that are waiting for the 'write' lock. I could not save the output of currentOp to a file for full analysis, because I executed the query from the command line and had only the window buffer; but from what I saw, there are many updates (by _id) waiting for the write lock.

Any ideas would be greatly appreciated.

One more thing: since each query will potentially bring back 30 documents, I think the data could be modeled differently, as follows:

{_id:"xxx", userId:"123", bs: [{b:1, cs:[{c:1, cnt:1}, {c:2, cnt:1}]}, {b:2, cs:[{c:1, cnt:1}]}]}

But when I tried this modeling, I could not increment the counter; I simply didn't find the right way to do it. I can do the insert and the push, but I cannot do the update with the following query:

db.coll.update({userId:"123", "bs.b":1, "bs.cs.c":1}, {"bs.cs.cnt" : {$inc : 1}})

I get an error about the illegal 'dot' in the query.
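For what it's worth, the error comes from operator placement rather than the dots themselves: in an update document, `$inc` must be the top-level key, mapping dotted field paths to increment amounts. The query above puts the dotted path at the top level, so Mongo treats it as a replacement-style document and rejects the dot. A small sketch of the two shapes (the validation helper is my loose illustration, not Mongo's actual parser; the `0` array index in the corrected form is hard-coded purely for illustration):

```python
# The failing update puts a dotted path at the top level of the update
# document; MongoDB only allows dotted paths *inside* an operator like $inc.
wrong = {"bs.cs.cnt": {"$inc": 1}}       # rejected: illegal dot in a top-level key
right = {"$inc": {"bs.$.cs.0.cnt": 1}}   # operator first, dotted path inside it

def is_valid_update(update_doc):
    """Loosely mimic the rule Mongo enforces: every top-level key of an
    update document must be an operator ($-prefixed) or contain no dot."""
    return all(k.startswith("$") or "." not in k for k in update_doc)

print(is_valid_update(wrong))  # False
print(is_valid_update(right))  # True
```

Note that even with the operator fixed, the positional `$` can only address the matched element of the outer `bs` array; MongoDB (certainly in the 1.8 era) has no way to position into the nested `cs` array as well, which is the deeper obstacle with this doubly-nested model.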

I'm pretty stuck by now. Waiting for some good ideas.

Thanks a lot,
Julia

Answer

MongoDB has a global write lock. This means that only one of your updates can proceed at a time.

The db.serverStatus() command can help you diagnose issues with the global write lock.

Here are a few things you could try:

1) Make sure you're using MongoDB 2.0. It has better concurrency than older versions, and 2.2 will have better concurrency still.

2) Queue your writes so that they are asynchronous, and perform them all from a single thread. This can help with concurrency, because then generally only one thread at a time will be contending for the global write lock.
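A minimal sketch of this single-writer pattern, assuming a plain in-memory dict as a stand-in for the MongoDB collection (in a real application the worker thread would issue the pymongo updates instead):

```python
# Suggestion 2 sketched: funnel all writes through one worker thread via a
# queue, so that request threads only enqueue and never contend for the
# database's global write lock themselves.
import queue
import threading

write_queue = queue.Queue()
collection = {}   # stand-in for the MongoDB collection

def writer():
    """The single writer: the only thread that ever touches the collection."""
    while True:
        doc_id = write_queue.get()
        if doc_id is None:          # sentinel value: shut the worker down
            break
        doc = collection.setdefault(doc_id, {"counter": 0})
        doc["counter"] += 1         # the $inc-style update from the question
        write_queue.task_done()

worker = threading.Thread(target=writer)
worker.start()

# Many request threads could call put() concurrently; here one loop
# stands in for the 500 JMeter threads.
for _ in range(500):
    write_queue.put("user-123")

write_queue.join()                  # block until every queued write is applied
write_queue.put(None)               # stop the worker
worker.join()
print(collection["user-123"]["counter"])  # 500
```

The trade-off is that writes become eventually consistent from the application's point of view: a request thread returns before its write has been applied, so a read issued immediately afterwards may not see it.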

3) If you're using the latest version and you can't make your writes single-threaded, then consider sharding. Sharding is about much more than just size; it's at least as important for write concurrency. If you shard, each shard runs in its own process with its own global write lock, which allows the whole system to process more writes.

