User Rate Limit Exceeded for Google Cloud Storage OAuth2 API


Question

I use the Google API PHP Client (http://code.google.com/p/google-api-php-client/) to make OAuth requests, specifically to obtain a new Access Token.

I have cached the Refresh Token and use it to generate new Access Tokens. I have gone over the documentation (https://developers.google.com/accounts/docs/OAuth2, https://developers.google.com/storage/docs/developer-guide) and it only talks about limits on the Refresh Token (one limit per client/user combination, and another per user across all clients), but says nothing about Access Token limits, other than the fact that an Access Token is only valid for an hour.

I'm trying to calculate bucket size usage across thousands of buckets. To cut down on time I parallelize the task by spawning a new process for each bucket, and each process requests its own new Access Token. I do this because I assumed there is no limit on the number of Access Tokens issued, and because, for a bucket with a very large number of objects, the calculation time plus potential exponential-backoff time could theoretically exceed the lifetime of a single Access Token.
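For concreteness, the per-worker refresh described above looks roughly like the sketch below. This is hypothetical code, not the asker's actual script: it assumes the Google_Client class with setClientId(), setClientSecret(), refreshToken() and getAccessToken() from the google-api-php-client, and the include path and worker wiring are placeholders.

<?php
// Hypothetical sketch of the per-worker token refresh described above.
// Assumes Google_Client from the google-api-php-client; the include path
// and the surrounding worker logic are placeholders.
require_once 'vendor/autoload.php'; // adjust to however the library is installed

function newAccessTokenForWorker($clientId, $clientSecret, $cachedRefreshToken)
{
    $client = new Google_Client();
    $client->setClientId($clientId);
    $client->setClientSecret($clientSecret);

    // Each worker process performs its own refresh, so this call is made
    // once per bucket worker (about 16 times in parallel in this setup).
    $client->refreshToken($cachedRefreshToken);

    return $client->getAccessToken();
}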

But when I try to do this, I see this error:

Error No: 1
Error on Line: 242
Error Message: Uncaught exception 'apiAuthException' with message 'Error refreshing the OAuth2 token, message: 
<HTML>
<HEAD>
<TITLE>User Rate Limit Exceeded</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>User Rate Limit Exceeded</H1>
<H2>Error 403</H2>
</BODY>
</HTML>

Is this because I'm requesting a lot of Access Tokens (16 at the moment)?

If not, what is causing this error, and what's the best way to get around it?

Is there a Google documentation page that documents the User Rate Limits?

Answer

There is no limit on the number of access tokens issued from a refresh token. There is, however, a limit on the rate at which access tokens can be requested. You can request thousands of access tokens from a single refresh token and they will all be simultaneously valid, but if you exceed a few QPS of access-token requests you will get a rate-limit error.

As mentioned above, a single access_token can be reused in parallel across multiple requests, as long as it is valid for all of those requests.
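A minimal sketch of that approach, assuming Google_Client from the PHP client library: the parent process refreshes once and hands the resulting access token to every worker, so no worker has to hit the token endpoint itself. The fork-based worker loop and computeBucketSize() are illustrative placeholders, not part of the original question or answer.

<?php
// Sketch: refresh once in the parent, then share the access token.
// Google_Client::setAccessToken() accepts a previously issued token, so the
// workers never call the token endpoint themselves. computeBucketSize() is
// a hypothetical placeholder.
require_once 'vendor/autoload.php';

$client = new Google_Client();
$client->setClientId($clientId);
$client->setClientSecret($clientSecret);
$client->refreshToken($cachedRefreshToken);     // one token request in total
$sharedAccessToken = $client->getAccessToken();

foreach ($buckets as $bucket) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child process: reuse the already issued token.
        $workerClient = new Google_Client();
        $workerClient->setAccessToken($sharedAccessToken);
        computeBucketSize($workerClient, $bucket);
        exit(0);
    }
}

// Wait for all bucket workers to finish.
while (pcntl_waitpid(-1, $status) > 0) {
    // keep reaping children
}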

The limits on access_tokens are not published because they are subject to change. A correct client implementation uses exponential backoff to ensure correctness in the face of rate-limit changes. However, in your case, since all the tokens share the same scope and usage context, you should be able to reuse the same token successfully.
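A hedged sketch of such a backoff wrapper around the refresh call follows; the retry count, the jitter, and catching the library's failure via the generic Exception base class are assumptions rather than documented behaviour.

<?php
// Sketch: exponential backoff with jitter around the token refresh.
// The exception handling and retry parameters are assumptions; adjust
// them for your version of the google-api-php-client.
function refreshWithBackoff(Google_Client $client, $refreshToken, $maxAttempts = 5)
{
    for ($attempt = 0; $attempt < $maxAttempts; $attempt++) {
        try {
            $client->refreshToken($refreshToken);
            return $client->getAccessToken();
        } catch (Exception $e) {
            if ($attempt + 1 === $maxAttempts) {
                throw $e; // out of retries, surface the error
            }
            sleep((int) pow(2, $attempt));   // 1s, 2s, 4s, ...
            usleep(mt_rand(0, 999999));      // plus up to ~1s of random jitter
        }
    }
}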

