Python requests with multithreading
Problem description
I've been trying to build a scraper with multithreading functionality for the past two days, and somehow I still can't manage it. At first I tried the regular multithreading approach with the threading module, but it wasn't faster than using a single thread. Later I learned that requests is blocking, so the multithreading approach wasn't really working. I kept researching and found out about grequests and gevent. Now I'm running tests with gevent, and it's still not faster than using a single thread. Is my coding wrong?
Here is the relevant part of my class:
import gevent.monkey
from gevent.pool import Pool
import requests

gevent.monkey.patch_all()

class Test:
    def __init__(self):
        self.session = requests.Session()
        self.pool = Pool(20)
        self.urls = [...urls...]

    def fetch(self, url):
        try:
            response = self.session.get(url, headers=self.headers)
        except:
            self.logger.error('Problem: ', id, exc_info=True)
        self.doSomething(response)

    def async(self):
        for url in self.urls:
            self.pool.spawn(self.fetch, url)
        self.pool.join()

test = Test()
test.async()
Recommended answer
Install the grequests module, which works with gevent (requests is not designed for async):
pip install grequests
Then change the code to something like this:
import grequests

class Test:
    def __init__(self):
        self.urls = [
            'http://www.example.com',
            'http://www.google.com',
            'http://www.yahoo.com',
            'http://www.stackoverflow.com/',
            'http://www.reddit.com/'
        ]

    def exception(self, request, exception):
        print "Problem: {}: {}".format(request.url, exception)

    def async(self):
        results = grequests.map((grequests.get(u) for u in self.urls),
                                exception_handler=self.exception, size=5)
        print results

test = Test()
test.async()
This is the official recommendation from the requests project:

Blocking Or Non-Blocking?

With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.
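The chunked retrieval the docs mention works the same way for any file-like source; as a rough stdlib-only sketch of what that style of streaming does (the read_in_chunks helper is illustrative, not part of requests):

```python
import io

def read_in_chunks(stream, chunk_size=4):
    # Yield fixed-size chunks until EOF; each read() still
    # blocks until that chunk is available.
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# In-memory stand-in for a streamed response body
body = io.BytesIO(b"hello world!")
chunks = list(read_in_chunks(body))
print(chunks)  # [b'hell', b'o wo', b'rld!']
```

Each iteration hands you a small piece of the body instead of buffering the whole response, but, as the docs say, every read still blocks.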
If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python's asynchronicity frameworks. Two excellent examples are grequests and requests-futures.
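requests-futures is a thin wrapper around the standard concurrent.futures machinery; the underlying pattern can be sketched with the stdlib alone (fetch here is a stand-in for a blocking HTTP call, not a real request):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a blocking session.get(url) call
    time.sleep(0.05)
    return 'response from {}'.format(url)

urls = ['http://a.example', 'http://b.example', 'http://c.example']

# The pool overlaps the blocking waits, so all three "requests" run at once
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fetch, urls))
print(results)
```

This is also why plain threading can work for I/O-bound scraping: the GIL is released while a thread waits on the network.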
Using this method gives me a noticeable performance increase with 10 URLs: 0.877s vs 3.852s with your original method.
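To reproduce this kind of comparison yourself, you can time both approaches against a simulated blocking call (the 0.1 s sleep stands in for network latency; real numbers will vary):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_fetch(url):
    time.sleep(0.1)  # simulate a blocking network round-trip

urls = ['http://example.com/{}'.format(i) for i in range(10)]

# Sequential: the waits add up (roughly 10 * 0.1s)
start = time.time()
for u in urls:
    fake_fetch(u)
sequential = time.time() - start

# Concurrent: the waits overlap (roughly one 0.1s wait)
start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(fake_fetch, urls))
concurrent = time.time() - start

print('sequential: {:.2f}s, concurrent: {:.2f}s'.format(sequential, concurrent))
```

With real URLs the ratio depends on server latency and bandwidth, but the shape of the result is the same as the 0.877s vs 3.852s measured above.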
This concludes the article on Python requests with multithreading. We hope the recommended answer helps, and thank you for supporting IT屋!