Mongo aggregation framework: what is the lock level of the last stage $out operation?
Question
Using Mongo's aggregation pipeline it is possible to write query results to a collection (existing or new) using the $out stage, like this:
db.my_collection.aggregate([
  { $match: { my_field: 'my_value' } },
  { $out: 'my_new_collection' }
])
The question is: what kind of lock does Mongo use while writing to my_new_collection? Is it a 'regular' write lock, or a global lock, like Map Reduce?
Answer
There is always a certain level of locking. Depending on your MongoDB version this is likely to be collection level, or in older releases database level, or even possibly document level with the WiredTiger storage engine. $out does however yield on writes, so individual documents are output from the pipeline rather than all in one go, and each write is atomic per document.
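As a quick check of which locking granularity applies to your deployment, the mongo shell can report the active storage engine via serverStatus:

```javascript
// "wiredTiger" means document-level locking; the legacy MMAPv1 engine
// locks at the collection (3.0+) or database (2.x) level.
db.serverStatus().storageEngine.name

// The locks section of serverStatus gives a live view of lock activity.
db.serverStatus().locks
```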
Even the mapReduce command has this option: you can set "nonAtomic" so that the output collection of a mapReduce exhibits the same behavior.
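As a sketch (the map/reduce functions and the my_output collection name here are only illustrative), note that "nonAtomic" is only valid with the "merge" and "reduce" output actions:

```javascript
// Hypothetical map and reduce functions for illustration.
var mapFn = function () { emit(this.my_field, 1); };
var reduceFn = function (key, values) { return Array.sum(values); };

// With nonAtomic: true, the write to the output collection yields
// between documents instead of holding one lock for the whole write.
db.my_collection.mapReduce(mapFn, reduceFn, {
  out: { merge: 'my_output', nonAtomic: true }
})
```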
The one thing to be aware of with $out is that it removes all documents from the target collection (though it does not drop any existing indexes) as the stage executes in "replace" mode. So attempting to read from or write to a collection targeted with "replace" set is very likely to fail (or produce unexpected results) while the aggregation operation is in progress.
The other limitations relating to sharded collections and capped collections are noted in the documentation.