Python: simple async download of url content?
Question
I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages.
Is there a simple way to set up an async / callback-based URL download mechanism in web.py? Low resource usage is particularly important, as each user-initiated request could result in downloading multiple pages.
The flow looks like this:
User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results
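As a minimal sketch of the "download 10 pages in parallel" step, a standard-library thread pool keeps resource usage bounded without pulling in a new framework (the `download_all` and `fetch` names here are hypothetical, not part of web.py):

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def fetch(url, timeout=10):
    # Hypothetical helper: fetch one page's body as bytes.
    with urlopen(url, timeout=timeout) as resp:
        return resp.read()


def download_all(urls, fetch=fetch, max_workers=10):
    # Run up to max_workers downloads concurrently;
    # pool.map preserves the input order of urls.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))
```

Inside a web.py handler you would call `download_all(urls)` and then analyze the returned bodies; the fetcher is injectable so it can be swapped or stubbed out.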
I recognize that Twisted would be a nice way to do this, but I'm already in web.py, so I'm particularly interested in something that can fit within web.py.
Recommended answer
Use the asynchronous HTTP client built on asynchat and asyncore: http://sourceforge.net/projects/asynchttp/files/asynchttp-production/asynchttp.py-1.0/asynchttp.py/download
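That client is callback-based, driven by a single asyncore event loop. The same single-threaded fan-out can be sketched with the standard-library asyncio event loop, assuming some coroutine `fetch` that returns one page's body (both names here are illustrative, not from the library above):

```python
import asyncio


async def download_all(urls, fetch):
    # Schedule every fetch at once on one event loop;
    # gather awaits them concurrently and preserves input order.
    return await asyncio.gather(*(fetch(u) for u in urls))

# Usage with any coroutine-based fetcher:
# bodies = asyncio.run(download_all(urls, fetch))
```

Like the asyncore approach, this keeps everything on one thread, so per-request overhead stays low even when each user request fans out into many page downloads.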