Multiple (asynchronous) connections with urllib2 or other http library?


Question

I have code like this:

import urllib2

results = []
for p in range(1, 1000):
    result = False
    while result is False:
        # retry until the request succeeds and process() returns a result
        ret = urllib2.Request('http://server/?' + str(p))
        try:
            result = process(urllib2.urlopen(ret).read())
        except (urllib2.HTTPError, urllib2.URLError):
            pass
    results.append(result)

I would like to make two or three requests at the same time to speed this up. Can I use urllib2 for this, and how? If not, which other library should I use? Thanks.

Answer

Take a look at gevent, a coroutine-based Python networking library that uses greenlet to provide a high-level synchronous API on top of the libevent event loop.

For example:

#!/usr/bin/python
# Copyright (c) 2009 Denis Bilenko. See LICENSE for details.

"""Spawn multiple workers and wait for them to complete"""

urls = ['http://www.google.com', 'http://www.yandex.ru', 'http://www.python.org']

import gevent
from gevent import monkey

# patches stdlib (including socket and ssl modules) to cooperate with other greenlets
monkey.patch_all()

import urllib2


def print_head(url):
    print 'Starting %s' % url
    data = urllib2.urlopen(url).read()
    print '%s: %s bytes: %r' % (url, len(data), data[:50])

jobs = [gevent.spawn(print_head, url) for url in urls]

gevent.joinall(jobs)
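
As a follow-up illustration, here is a minimal sketch (an assumption, not part of the original answer) applying the same gevent approach to the loop from the question, using a greenlet pool to cap concurrency. The http://server/?p URL and the process() function are carried over from the question; the stub process() body and the pool size of 3 are assumptions.

#!/usr/bin/python
# A sketch (not from the original answer) applying gevent to the
# question's loop, with a greenlet pool capping concurrency.

from gevent import monkey
from gevent.pool import Pool

# patch stdlib so urllib2's blocking socket calls yield to other greenlets
monkey.patch_all()

import urllib2


def process(data):
    # placeholder for the question's process() function (assumption)
    return len(data)


def fetch(p):
    # retry until the request succeeds, as in the original loop
    while True:
        try:
            return process(urllib2.urlopen('http://server/?' + str(p)).read())
        except (urllib2.HTTPError, urllib2.URLError):
            pass

pool = Pool(3)  # at most three requests in flight at once, per the question
results = pool.map(fetch, range(1, 1000))

pool.map blocks until every greenlet finishes and returns the results in request order, so results[0] corresponds to p = 1.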
