Asynchronous URLfetch when we don't care about the result? [Python]


Question

In some code I'm writing for GAE, I need to periodically perform a GET on a URL on another system, in essence 'pinging' it, and I'm not terribly concerned whether the request fails, times out, or succeeds.

As I basically want to 'fire and forget' rather than slow down my own code by waiting for the request, I'm using an asynchronous urlfetch and not calling get_result(), roughly as in the sketch below.
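For reference, a minimal sketch of the fire-and-forget pattern described here, using the App Engine urlfetch API; the ping URL and deadline are placeholder values:

    from google.appengine.api import urlfetch

    def ping():
        # Start the fetch asynchronously and return without ever calling
        # rpc.get_result(), so the handler never blocks on the ping.
        rpc = urlfetch.create_rpc(deadline=5)  # assumed deadline
        urlfetch.make_fetch_call(rpc, 'http://example.com/ping')  # placeholder URL
        # No rpc.get_result() call -- if the RPC is still outstanding when
        # the request ends, App Engine logs the warning quoted below.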

In my log I get a warning:

Found 1 RPC request(s) without matching response (presumably due to timeouts or other errors)

Am I missing an obviously better way to do this? A Task Queue or Deferred Task seems (to me) like overkill in this instance.

Any input would be appreciated.

Answer

A task queue task is your best option here. The message you're seeing in the log indicates that the request is waiting for your URLFetch to complete before returning, so the asynchronous call doesn't actually help. You say a task is 'overkill', but really, tasks are very lightweight and definitely the best way to do this. Deferred will even let you defer the fetch call directly, rather than having to write a function to call, as in the sketch below.
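For illustration, a minimal sketch of deferring the fetch call directly, assuming the deferred builtin is enabled in app.yaml; the URL is a placeholder:

    from google.appengine.api import urlfetch
    from google.appengine.ext import deferred

    # Enqueue urlfetch.fetch itself as a task: the current request returns
    # immediately, and the GET runs later in a task queue worker.
    deferred.defer(urlfetch.fetch, 'http://example.com/ping')

Note that a deferred task that raises (for example, on a timeout) is retried by the task queue by default, which may or may not matter for a ping whose outcome you don't care about.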

