Python requests with multithreading


Problem Description


I've been trying to build a scraper with multithreading functionality for the past two days. Somehow I still couldn't manage it. At first I tried a regular multithreading approach with the threading module, but it wasn't faster than using a single thread. Later I learned that requests is blocking, so the multithreading approach wasn't really working. I kept researching and found out about grequests and gevent. Now I'm running tests with gevent and it's still not faster than using a single thread. Is my code wrong?


Here is the relevant part of my class:

import gevent.monkey
gevent.monkey.patch_all()  # patch before importing requests so its sockets are cooperative

from gevent.pool import Pool
import requests

class Test:
    def __init__(self):
        self.session = requests.Session()
        self.pool = Pool(20)
        self.urls = [...urls...]

    def fetch(self, url):
        try:
            response = self.session.get(url, headers=self.headers)
        except requests.RequestException:
            self.logger.error('Problem: ', id, exc_info=True)
            return  # response is undefined on failure, so bail out

        self.doSomething(response)

    def run_async(self):  # 'async' is a reserved word in Python 3.7+
        for url in self.urls:
            self.pool.spawn(self.fetch, url)

        self.pool.join()

test = Test()
test.run_async()

Answer


Install the grequests module, which works with gevent (requests is not designed for async):

pip install grequests

Then change the code to this:

import grequests

class Test:
    def __init__(self):
        self.urls = [
            'http://www.example.com',
            'http://www.google.com',
            'http://www.yahoo.com',
            'http://www.stackoverflow.com/',
            'http://www.reddit.com/'
        ]

    def exception(self, request, exception):
        print("Problem: {}: {}".format(request.url, exception))

    def run_async(self):  # 'async' is a reserved word in Python 3.7+
        results = grequests.map((grequests.get(u) for u in self.urls),
                                exception_handler=self.exception, size=5)
        print(results)

test = Test()
test.run_async()

This is the official recommendation from the requests project:

Blocking Or Non-Blocking?


With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.


If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python's asynchronicity frameworks. Two excellent examples are grequests and requests-futures.
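The streaming feature the quote refers to can be sketched without a network. `consume_stream` below shows the chunk-by-chunk consumption pattern; `FakeResponse` is a stand-in I'm using so the sketch runs offline, and with real requests you would pass a response obtained with `stream=True` instead:

```python
def consume_stream(response, chunk_size=8192):
    """Accumulate a response body chunk by chunk; each chunk read still blocks."""
    buf = bytearray()
    for chunk in response.iter_content(chunk_size=chunk_size):
        buf.extend(chunk)
    return bytes(buf)

class FakeResponse:
    """Stand-in for requests.Response; only iter_content is needed here."""
    def iter_content(self, chunk_size=8192):
        for chunk in (b'hello ', b'world'):
            yield chunk

data = consume_stream(FakeResponse())
# Real usage (network required):
# data = consume_stream(requests.get('http://www.example.com', stream=True))
```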


Using this method gives me a noticeable performance increase with 10 URLs: 0.877s vs. 3.852s with your original method.
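The speedup comes from overlapping the blocking waits. The requests-futures approach mentioned above is built on the standard library's ThreadPoolExecutor; the same overlap effect can be sketched with only the standard library, where `fake_fetch` (a `time.sleep`, my stand-in for a blocking HTTP round-trip) lets it run without a network:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_fetch(url):
    time.sleep(0.2)  # simulate a blocking network round-trip
    return 'body of {}'.format(url)

urls = ['http://www.example.com/{}'.format(i) for i in range(10)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    # map preserves input order; the ten 0.2s waits run concurrently,
    # so the whole batch takes roughly 0.2s instead of ~2s sequentially
    results = list(pool.map(fake_fetch, urls))
elapsed = time.perf_counter() - start
```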
