Starting, Stopping, and Continuing the Google App Engine BulkLoader


Question


I have quite a bit of data that I will be uploading into Google App Engine. I want to use the bulkloader to help get it in there. However, I have so much data that I generally use up my CPU quota before it's done. Also, any other problem, such as a bad internet connection or a random computer issue, can stop the process.


Is there any way to continue a bulkload from where you left off? Or to only bulkload data that has not been written to the datastore?


I couldn't find anything in the docs, so I assume any answer will include digging into the code.

Answer


Well, it is in the docs:


If the transfer is interrupted, you can resume the transfer from where it left off using the --db_filename=... argument. The value is the name of the progress file created by the tool, which is either a name you provided with the --db_filename argument when you started the transfer, or a default name that includes a timestamp. This assumes you have sqlite3 installed, and did not disable the progress file with --db_filename=skip.

http://code.google.com/appengine/docs/python/tools/uploadingdata.html
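As a sketch of what that looks like in practice (the config file, data file, kind name, and app URL below are placeholders, not from the original question):

```shell
# Start the upload; the tool writes a progress file with a timestamped
# default name such as bulkloader-progress-20101112.123456.sql3.
appcfg.py upload_data --config_file=bulkloader.yaml \
    --filename=data.csv --kind=MyKind \
    --url=http://myapp.appspot.com/_ah/remote_api

# If the transfer is interrupted, resume it by pointing --db_filename
# at that progress file; already-written rows are skipped.
appcfg.py upload_data --config_file=bulkloader.yaml \
    --filename=data.csv --kind=MyKind \
    --db_filename=bulkloader-progress-20101112.123456.sql3 \
    --url=http://myapp.appspot.com/_ah/remote_api
```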


(I used it some time ago, so I had a feeling it would be there.)

