MongoDB: Best way to pair and delete sequential database entries


Problem description


Okay, so let's say I'm making a game of blind war! Users A & B each have x soldiers.

There are currently 0 DB docs.

User A sends 50 soldiers, creating a DB doc. User B sends 62 soldiers after user A!

This creates a new DB doc.

I need the most effective/scalable way to look up user A's doc, compare it to user B's doc, then delete both docs! (After returning the result, of course.)

Here's the problem! I could potentially have 10,000+ users sending soldiers at roughly the same time! How can I successfully complete the above process without overlapping?

I'm using the MEAN stack for development, so I'm not limited to doing this in the database, but obviously the web app has to be 100% secure!

If you need any additional info or explanation please let me know and I'll update this question

-Thanks

Solution

One thing that comes to mind here is that you may not need to do all the work you think you need to, and your problem can probably be solved with a little help from TTL indexes and possibly capped collections. Consider the following entries:

{ "_id" : ObjectId("531cf5f3ba53b9dd07756bb7"), "user" : "A", "units" : 50 }
{ "_id" : ObjectId("531cf622ba53b9dd07756bb9"), "user" : "B", "units" : 62 }

So there are two entries, and you got that _id value back when you inserted. So at the start, "A" had no one to play against, but the entry for "B" will play against the one before it.

ObjectIds are monotonic, which means that the "next" one along is always greater in value than the last. So with the inserted data, just do this:

db.moves.find({ 
    _id: {$lt: ObjectId("531cf622ba53b9dd07756bb9") }, 
    user: { $ne: "B" } 
}).limit(1)

That gives the "move" inserted before the current move that was just made, and it works because anything that was previously inserted will have an _id with a lesser value than the current item. You also make sure that you are not "playing" against the user's own move, and of course you limit the result to one document only.
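The ordering claim above can be checked directly: the first four bytes of an ObjectId are a Unix timestamp in seconds, so the 24-character hex strings sort in insertion order. A minimal sketch (the `objectIdTimestamp` helper is illustrative, not a driver API):

```javascript
// Extract the timestamp embedded in an ObjectId hex string.
// The first 8 hex characters (4 bytes) are seconds since the epoch,
// which is why earlier inserts always compare as "less than" later ones.
function objectIdTimestamp(hexId) {
  return parseInt(hexId.substring(0, 8), 16); // seconds since epoch
}

const moveA = "531cf5f3ba53b9dd07756bb7"; // user A, inserted first
const moveB = "531cf622ba53b9dd07756bb9"; // user B, inserted second

// Lexicographic hex order matches insertion order here:
console.log(moveA < moveB);                                       // true
console.log(objectIdTimestamp(moveA) < objectIdTimestamp(moveB)); // true
```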

So the "moves" will be forever moving forward. When the next insert is made by user "C", they get the "move" from user "B", and then user "A" would get the "move" from user "C", and so on.

All that "could" happen here is that "B" makes the next "move" in sequence, and you would pick up the same document as in the last request. But that is a point for your "session" design: store the last "result", make sure that you didn't get the same thing back, and deal with that however you want to in your design.
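The lookup plus the session guard described above can be sketched as a pure function over an in-memory array; `findOpponent` and its arguments are illustrative names, not code from the answer, and a real implementation would run the equivalent `find()` against MongoDB:

```javascript
// Simulate the opponent lookup: earlier _id ($lt), not the same
// user ($ne), and skip the _id the session already played against.
// Returns the most recent qualifying move, or undefined if none exists.
function findOpponent(moves, currentId, currentUser, lastPlayedId) {
  return moves
    .filter(m => m._id < currentId)        // only earlier inserts ($lt)
    .filter(m => m.user !== currentUser)   // never play your own move ($ne)
    .filter(m => m._id !== lastPlayedId)   // session guard: no repeat match
    .sort((a, b) => (a._id < b._id ? 1 : -1))[0]; // most recent first, limit(1)
}

const moves = [
  { _id: "531cf5f3ba53b9dd07756bb7", user: "A", units: 50 },
  { _id: "531cf622ba53b9dd07756bb9", user: "B", units: 62 },
];

// B's move pairs with A's earlier move; A's first move finds no opponent.
console.log(findOpponent(moves, "531cf622ba53b9dd07756bb9", "B", null)); // A's doc
console.log(findOpponent(moves, "531cf5f3ba53b9dd07756bb7", "A", null)); // undefined
```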

That should be enough to "play" with. But let's get to your "deletion" part.

Naturally you "think" you want to delete things, but back to my initial "helpers": this should not be necessary. From the above, deletion becomes only a matter of "cleaning up", so your collection does not grow to massive proportions.

If you applied a TTL index, in much the same way as this tutorial explains, your collection entries will be cleaned up for you, and removed after a certain period of time.
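A TTL index as suggested could look like the following mongo-shell sketch; the `createdAt` field name and the one-hour window are assumptions for illustration, not values from the answer:

```javascript
// Config sketch: MongoDB's TTL monitor removes documents once their
// indexed date field is older than expireAfterSeconds.
db.moves.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 3600 } // clean up moves roughly an hour after insert
)

// Inserts would then need to carry the timestamp:
db.moves.insert({ user: "A", units: 50, createdAt: new Date() })
</imports>
```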

Also, what can be done, especially considering that we are using the increasing nature of the _id key and that this is more or less a "queue" in nature, is to apply this as a capped collection. So you can set a maximum size for how many "moves" you will keep at any given time.
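A capped collection for the same data could be created as below; the `size` and `max` figures are illustrative only. One caveat worth noting: per the MongoDB documentation, a TTL index cannot be created on a capped collection (capped collections do not support removing documents), so in practice you would pick whichever of the two clean-up mechanisms fits better.

```javascript
// Config sketch: a capped collection acts as a fixed-size FIFO queue.
// `size` (bytes) is required; `max` additionally caps the document count.
db.createCollection("moves", { capped: true, size: 1048576, max: 10000 })
```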

Combining the two together, you get something that only "grows" to a certain size and will be automatically cleaned up for you, should activity slow down a bit. And that's going to keep all of the operations fast.

The bottom line is that the concurrency of "deletes" that you were worried about has been removed, by actually "removing" the need to delete the documents that were just played. The query keeps it simple, and the TTL index and capped collection look after your data management for you.

So there you have what is my take on a very concurrent game of "Blind War".
