Couchbase, reduction too large error
Question
At work I use Couchbase, and I have run into a problem. Data comes into Couchbase from some devices, and afterwards I call an aggregate view. This view must aggregate values by two keys: timestamp and deviceId. Everything was fine until I tried to aggregate more than 10k values. In that case I get a reduction error.
Map function:
function (doc, meta)
{
  if (doc.type == "PeopleCountingIn" && doc.undefined != true)
  {
    emit(doc.id + "@" + doc.time, [doc.in, doc.out, doc.id, doc.time, meta.id]);
  }
}
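As an aside (this is an assumption on my part, not part of the original question): since the view aggregates by two keys, a common alternative is to emit a composite array key instead of a concatenated string, so callers can query with group_level=1 (per device) or group_level=2 (per device and timestamp). A minimal sketch, with a stand-in emit() so it can run outside Couchbase:

```javascript
// Sketch: emit an array key [id, time] instead of "id@time".
// Hypothetical variant of the map function, not the original one.
function mapFn(doc, meta) {
  var rows = [];
  // stand-in for Couchbase's emit() so the sketch is runnable standalone
  function emit(key, value) { rows.push({ key: key, value: value }); }

  if (doc.type == "PeopleCountingIn" && doc.undefined != true) {
    // array keys enable group_level queries on the reduced view
    emit([doc.id, doc.time], [doc.in, doc.out, meta.id]);
  }
  return rows;
}
```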
Reduce function:
function (key, values, rereduce)
{
  var result =
  {
    "id": 0,
    "time": 0,
    "in": 0,
    "out": 0,
    "docs": []
  };
  if (rereduce)
  {
    result.id = values[0].id;
    result.time = values[0].time;
    for (var i = 0; i < values.length; i++)
    {
      result.in = result.in + values[i].in;
      result.out = result.out + values[i].out;
      for (var j = 0; j < values[i].docs.length; j++)
      {
        result.docs.push(values[i].docs[j]);
      }
    }
  }
  else
  {
    result.id = values[0][2];
    result.time = values[0][3];
    for (var i = 0; i < values.length; i++)
    {
      result.docs.push(values[i][4]);
      result.in = result.in + values[i][0];
      result.out = result.out + values[i][1];
    }
  }
  return result;
}
Sample document:
{
  "id": "12292228@0",
  "time": 1401431340,
  "in": 0,
  "out": 0,
  "type": "PeopleCountingIn"
}
Update
Output document:
{"rows":[
{"key":"12201774@0@1401144240","value":{"id":"12201774@0","time":1401144240,"in":0,"out":0,"docs":["12231774@0@1401546080@1792560127"]}},
{"key":"12201774@0@1401202080","value":{"id":"12201774@0","time":1401202080,"in":0,"out":0,"docs":["12201774@0@1401202080@1792560840"]}}
]
}
The error occurs when the "docs" array length is more than 100, which I think is when the rereduce function comes into play. Is there some way to fix this error other than making this array smaller?
Answer
There are a number of limits on the output of map and reduce functions, to prevent indexes from taking too long to build and/or growing too large.
These are in the process of being added to the official documentation, but in the meantime quoting from the issue (MB-11668) tracking the documentation update:
1) indexer_max_doc_size - documents larger than this value are skipped by the indexer. A message is logged (with document ID, its size, bucket name, view name, etc.) when such a document is encountered. A value of 0 means no limit (as it used to be before). Current default value is 1048576 bytes (1 MB).
2) max_kv_size_per_doc - maximum total size (bytes) of KV pairs that can be emitted for a single document for a single view. When this limit is exceeded, a message is logged (with document ID, its size, bucket name, view name, etc.). A value of 0 means no limit (as it used to be before). Current default value is 1048576 bytes (1 MB).
Additionally, there is a limit of 64 kB on the size of a single reduction (the output of the reduce() function). I suggest you re-work your reduce function to return data within this limit. See MB-7952 for a technical discussion on why this is the case.
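One way to stay under the 64 kB reduction limit is to stop accumulating the unbounded docs array and return only fixed-size totals; the contributing document IDs can then be fetched separately by querying the view with reduce=false. A minimal sketch (assuming the map emits [in, out, id, time, meta.id] as in the question); this is an illustrative rework, not the only possible fix:

```javascript
// Sketch: a reduce function whose output size is constant, so a
// single reduction can never exceed Couchbase's 64 kB limit.
// The unbounded docs array from the original is intentionally dropped.
function reduceFn(key, values, rereduce) {
  var result = { in: 0, out: 0, count: 0 };
  for (var i = 0; i < values.length; i++) {
    if (rereduce) {
      // values are partial results produced by earlier reduce calls
      result.in += values[i].in;
      result.out += values[i].out;
      result.count += values[i].count;
    } else {
      // values are raw map emits: [in, out, id, time, meta.id]
      result.in += values[i][0];
      result.out += values[i][1];
      result.count += 1;
    }
  }
  return result;
}
```

Because the output shape never grows with the number of input rows, rereduce passes stay within the limit regardless of how many documents fall under one key.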