Google App Engine timeout: The datastore operation timed out, or the data was temporarily unavailable


Problem description





This is a common exception I'm getting in my application's log, usually 5 or 6 times a day with a traffic of 1K visits/day:

db error trying to store stats
Traceback (most recent call last):
  File "/base/data/home/apps/stackprinter/1b.347728306076327132/app/utility/worker.py", line 36, in deferred_store_print_statistics
    dbcounter.increment()
  File "/base/data/home/apps/stackprinter/1b.347728306076327132/app/db/counter.py", line 28, in increment
    db.run_in_transaction(txn)
  File "/base/python_runtime/python_lib/versions/1/google/appengine/api/datastore.py", line 1981, in RunInTransaction
    DEFAULT_TRANSACTION_RETRIES, function, *args, **kwargs)
  File "/base/python_runtime/python_lib/versions/1/google/appengine/api/datastore.py", line 2067, in RunInTransactionCustomRetries
    ok, result = _DoOneTry(new_connection, function, args, kwargs)
  File "/base/python_runtime/python_lib/versions/1/google/appengine/api/datastore.py", line 2105, in _DoOneTry
    if new_connection.commit():
  File "/base/python_runtime/python_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1585, in commit
    return rpc.get_result()
  File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 530, in get_result
    return self.__get_result_hook(self)
  File "/base/python_runtime/python_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1613, in __commit_hook
    raise _ToDatastoreError(err)
Timeout: The datastore operation timed out, or the data was temporarily unavailable.

The function that is raising the exception above is the following one:

def store_printed_question(question_id, service, title):
    def _store_TX():
        entity = Question.get_by_key_name(
            key_names='%s_%s' % (question_id, service))
        if entity:
            entity.counter = entity.counter + 1
            entity.put()
        else:
            Question(key_name='%s_%s' % (question_id, service),
                     question_id=question_id,
                     service=service,
                     title=title,
                     counter=1).put()
    db.run_in_transaction(_store_TX)
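Since the Timeout above is transient, one option (besides the automatic retries that run_in_transaction and the task queue already perform on failure) is to absorb it inside the worker with an explicit backoff. A minimal, datastore-agnostic sketch; TransientError, run_with_retries, and flaky_increment are illustrative stand-ins, not GAE APIs:

```python
import time

class TransientError(Exception):
    """Stand-in for a transient failure such as a datastore Timeout."""

def run_with_retries(operation, attempts=3, base_delay=0.1, sleep=time.sleep):
    """Call operation(); on TransientError, back off exponentially and retry.

    Re-raises the last error once the attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Example: an operation that fails twice, then succeeds.
calls = {"n": 0}

def flaky_increment():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("datastore operation timed out")
    return calls["n"]

result = run_with_retries(flaky_increment, sleep=lambda _: None)
```

Here the transaction body would go where flaky_increment is; the injectable sleep makes the backoff testable.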

Basically, the store_printed_question function checks whether a given question was previously printed and, if so, increments the related counter in a single transaction.
This function is added by a WebHandler to a deferred worker using the predefined default queue which, as you might know, has a throughput rate of five task invocations per second.

On an entity with six attributes (two indexed), I thought that using transactions throttled by the deferred task rate limit would let me avoid datastore timeouts, but, looking at the log, this error still shows up daily.

The counter I'm storing is not that important, so I'm not worried about these timeouts; still, I'm curious why Google App Engine can't handle this task properly even at a rate as low as 5 tasks per second, and whether lowering the rate could be a solution.
A sharded counter on each question just to avoid timeouts seems like overkill to me.
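For reference, the sharding idea dismissed above spreads writes for one logical counter across N entities so no single entity group is updated too often. A minimal sketch of the pattern; the dict is a stand-in for the datastore, and all names are hypothetical:

```python
import random

NUM_SHARDS = 20
_store = {}  # stand-in for the datastore: key_name -> count

def shard_key(question_id, service, shard):
    """Build the key name of one shard of the counter."""
    return "%s_%s_shard%d" % (question_id, service, shard)

def increment(question_id, service):
    """Write to one randomly chosen shard, spreading write contention."""
    key = shard_key(question_id, service, random.randrange(NUM_SHARDS))
    _store[key] = _store.get(key, 0) + 1

def get_count(question_id, service):
    """Sum all shards to recover the total count."""
    return sum(_store.get(shard_key(question_id, service, s), 0)
               for s in range(NUM_SHARDS))

for _ in range(100):
    increment("42", "stackoverflow")
```

Reads become N-way sums, which is the cost that makes this feel like overkill for a low-value counter.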

EDIT:
I have set the rate limit to 1 task per second on the default queue; I'm still getting the same error.
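For reference, the rate cap mentioned above is set in queue.yaml; a 1 task/second cap on the default queue would look something like this (field names as in the GAE queue configuration format):

```yaml
queue:
- name: default
  rate: 1/s
  bucket_size: 1
```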

Solution

A query can only live for 30 seconds. See my answer to this question for some sample code to break a query up using cursors.
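The cursor pattern the answer refers to amounts to fetching small pages and handing an opaque cursor to the next (short-lived) invocation instead of running one long query. A simulation with an in-memory list; fetch_page stands in for a datastore fetch-plus-cursor call and is not a GAE API:

```python
BATCH_SIZE = 3

def fetch_page(items, cursor, limit):
    """Stand-in for fetching one page of query results:
    returns the page plus an opaque cursor for the next call."""
    page = items[cursor:cursor + limit]
    return page, cursor + len(page)

def process_all(items, handle):
    """Process items in short batches; in App Engine each loop iteration
    could be a separate deferred task that passes the cursor along."""
    cursor = 0
    while True:
        page, cursor = fetch_page(items, cursor, BATCH_SIZE)
        if not page:
            break
        for item in page:
            handle(item)
    return cursor

seen = []
processed = process_all(list(range(7)), seen.append)
```

Because each batch is small, no single invocation comes near the query lifetime limit.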
