python requests is slow
I am developing a download manager, using the requests module in Python to check for valid links (and hopefully catch broken ones). My code for checking a link is below:
import requests

url = 'http://pyscripter.googlecode.com/files/PyScripter-v2.5.3-Setup.exe'
r = requests.get(url, allow_redirects=False)  # this line takes 40 seconds
if r.status_code == 200:
    print("link valid")
else:
    print("link invalid")
Now, the issue is that this check takes approximately 40 seconds, which is huge. My question is: how can I speed it up, maybe using urllib2 or something else?
Note: if I replace url with the actual URL string 'http://pyscripter.googlecode.com/files/PyScripter-v2.5.3-Setup.exe', it takes one second, so it appears to be an issue with requests.
Not all hosts support HEAD requests. You can use this instead:
r = requests.get(url, stream=True)
This downloads only the headers, not the response content. Moreover, if the idea is to fetch the file afterwards, you don't have to make another request.
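That pattern can be wrapped up as follows. This is a minimal sketch: the helper name check_link is ours, not from the answer, and it assumes the requests package is installed.

```python
import requests

def check_link(url):
    """Return True if the URL answers with HTTP 200, reading headers only."""
    # stream=True defers the body download: only the status line and
    # headers are read, which is why this is fast for large files.
    r = requests.get(url, stream=True, timeout=10)
    try:
        return r.status_code == 200
    finally:
        # We never consume the body, so release the connection explicitly.
        # To actually download the file later, iterate r.iter_content()
        # on the same response instead of closing it.
        r.close()
```

Calling check_link on the PyScripter URL from the question would then report validity without pulling down the whole installer.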
See the requests documentation for more information.
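The question also mentions urllib2; its Python 3 successor is urllib.request, and a HEAD-only check needs no third-party package at all. A minimal sketch under that assumption (the function name is_link_valid is ours):

```python
import urllib.error
import urllib.request

def is_link_valid(url, timeout=10):
    """Return True if the server answers a HEAD request with HTTP 200."""
    # A HEAD request asks for status and headers only, never the body.
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Unreachable host, refused connection, or HTTP error status.
        return False
```

As the answer notes, not every host honors HEAD, so a stream=True GET remains the more robust fallback.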