Request rate is large


Problem description

I'm using Azure DocumentDB and accessing it through Node.js on an Express server. When I query in a loop at low volume, a few hundred, there is no issue. But when the loop queries a slightly larger volume, say around a thousand plus,

I get partial results (inconsistent; every time I run it, the result values are not the same, perhaps because of the asynchronous nature of Node.js), and after a few results it crashes with this error:

body: '{"code":"429","message":"Message: {\"Errors\":[\"Request rate is large\"]}\r\nActivityId: 1fecee65-0bb7-4991-a984-292c0d06693d, Request URI: /apps/cce94097-e5b2-42ab-9232-6abd12f53528/services/70926718-b021-45ee-ba2f-46c4669d952e/partitions/dd46d670-ab6f-4dca-bbbb-937647b03d97/replicas/130845018837894542p"}' }

Does this mean DocumentDB fails to handle 1000+ requests per second? Altogether it gives me a bad impression of NoSQL techniques. Is this a shortcoming of DocumentDB?

Recommended answer

As Gaurav suggests, you may be able to avoid the problem by bumping up the pricing tier, but even if you go to the highest tier, you should still be prepared to handle 429 errors. When you get a 429 error, the response will include an 'x-ms-retry-after-ms' header containing the number of milliseconds you should wait before retrying the request that caused the error.

I wrote logic to handle this in my documentdb-utils Node.js package. You can either use documentdb-utils or duplicate the logic yourself. Here is a snippet:

createDocument = function() {
    client.createDocument(colLink, document, function(err, response, header) {
        if (err != null) {
            if (err.code === 429) {
                // The server tells us how many milliseconds to wait
                // before retrying the throttled request.
                var retryAfterHeader = header['x-ms-retry-after-ms'] || 1;
                var retryAfter = Number(retryAfterHeader);
                // Retry the same operation after the requested back-off.
                return setTimeout(createDocument, retryAfter);
            } else {
                throw new Error(JSON.stringify(err));
            }
        } else {
            console.log('document saved successfully');
        }
    });
};

Note that in the example above, document is within the scope of createDocument. This makes the retry logic a bit simpler, but if you don't like using widely scoped variables, you can pass document into createDocument and then pass it into a lambda function in the setTimeout call.
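If you prefer that parameter-passing style, the retry can be factored into a small generic wrapper. This is a minimal sketch, not the documentdb-utils implementation; `retryOnThrottle` is a made-up name, and the commented usage assumes your own `client`, `colLink`, and `document`.

```javascript
// Hypothetical helper (not part of the DocumentDB SDK): retries a
// callback-style operation whenever it fails with a 429, waiting for the
// interval the server supplies in the 'x-ms-retry-after-ms' header.
function retryOnThrottle(operation, callback) {
    operation(function (err, result, headers) {
        if (err && err.code === 429) {
            var retryAfter =
                Number((headers && headers['x-ms-retry-after-ms']) || 1);
            return setTimeout(function () {
                retryOnThrottle(operation, callback);
            }, retryAfter);
        }
        // Success, or a non-throttle error the caller must handle.
        callback(err, result);
    });
}

// The document is captured by the wrapping lambda, so nothing needs to
// live in a wider scope:
// retryOnThrottle(function (cb) {
//     client.createDocument(colLink, document, cb);
// }, function (err, savedDoc) { /* handle final result */ });
```

Because the wrapper only inspects `err.code` and the retry header, the same function works for reads, upserts, and queries, not just createDocument.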
