Python URLRetrieve Limit Rate and Resume Partial Download
Question
I'm using the code from this thread to limit my download rate.
How do I incorporate resuming partial downloads into the rate-limiting code? The examples I've found use urlopen instead of urlretrieve, and the RateLimit class depends on urlretrieve.
I'd like to have an external function that controls the partial downloading, without having to change the RateLimit class:
import urllib
from throttle import TokenBucket, RateLimit

def retrieve_limit_rate(url, filename, rate_limit):
    """Fetch the contents of url, throttled to rate_limit kB/s."""
    bucket = TokenBucket(10*rate_limit, rate_limit)
    print "rate limit = %.1f kB/s" % (rate_limit,)
    print 'Downloading %s...' % filename
    rate_limiter = RateLimit(bucket, filename)
    #
    # What do I put here to allow resuming files?
    #
    return urllib.urlretrieve(url, filename, rate_limiter)
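For reference, the TokenBucket the question imports comes from the linked thread, so its exact code isn't shown here. The general pattern it names can be sketched roughly like this (a minimal, hypothetical sketch; the real throttle module may differ):

```python
import time

class TokenBucket(object):
    """Minimal token bucket: holds up to `capacity` tokens and is
    refilled at `fill_rate` tokens per second."""

    def __init__(self, capacity, fill_rate):
        self.capacity = float(capacity)
        self.fill_rate = float(fill_rate)
        self.tokens = float(capacity)
        self.timestamp = time.time()

    def consume(self, tokens):
        """Try to take `tokens` from the bucket. Returns 0.0 if they
        were available, otherwise the seconds to wait before retrying."""
        now = time.time()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + self.fill_rate * (now - self.timestamp))
        self.timestamp = now
        if tokens <= self.tokens:
            self.tokens -= tokens
            return 0.0
        return (tokens - self.tokens) / self.fill_rate
```

A rate limiter built on this would call consume() with the size of each downloaded chunk and sleep for the returned number of seconds.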
Answer
You may be able to use PyCurl instead:
import os
import pycurl

def curl_progress(total, existing, upload_t, upload_d):
    try:
        frac = float(existing)/float(total)
    except ZeroDivisionError:
        # total is 0 before the server reports a content length
        frac = 0
    print "Downloaded %d/%d (%0.2f%%)" % (existing, total, frac*100)

def curl_limit_rate(url, filename, rate_limit):
    """Download url to filename, rate limit in bytes per second."""
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.MAX_RECV_SPEED_LARGE, rate_limit)
    if os.path.exists(filename):
        # Resume: append to the partial file and ask the server
        # to start from the byte offset we already have.
        file_id = open(filename, "ab")
        c.setopt(c.RESUME_FROM, os.path.getsize(filename))
    else:
        file_id = open(filename, "wb")
    c.setopt(c.WRITEDATA, file_id)
    c.setopt(c.NOPROGRESS, 0)
    c.setopt(c.PROGRESSFUNCTION, curl_progress)
    c.perform()
    c.close()
    file_id.close()
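The resume decision in the snippet above is independent of PyCurl: if a partial file exists, open it in append mode and start the transfer at its current size. That logic can be isolated and tested without any network I/O (a sketch; resume_offset is a helper name invented here):

```python
import os

def resume_offset(filename):
    """Return (mode, offset): append from the current size when a
    partial file already exists, otherwise write from scratch."""
    if os.path.exists(filename):
        return "ab", os.path.getsize(filename)
    return "wb", 0
```

With PyCurl the offset would be passed to RESUME_FROM as above; with a plain urllib-based downloader you would instead send a "Range: bytes=offset-" request header to achieve the same effect.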