Check for `urllib.urlretrieve(url, file_name)` Completion Status


Problem Description

How do I check whether urllib.urlretrieve(url, file_name) has completed before allowing my program to advance to the next statement?

Take the following code snippet as an example:

import traceback
import sys
import time
import Image  # PIL (old-style import)
from urllib import urlretrieve

# imgUrl is assumed to be defined earlier in the script
try:
    print "Downloading gif....."
    urlretrieve(imgUrl, "tides.gif")
    # Allow time for image to download/save:
    time.sleep(5)
    print "Gif Downloaded."
except:
    print "Failed to Download new GIF"
    raw_input('Press Enter to exit...')
    sys.exit()

try:
    print "Converting GIF to JPG...."
    Image.open("tides.gif").convert('RGB').save("tides.jpg")
    print "Image Converted"
except Exception, e:
    print "Conversion FAIL:", sys.exc_info()[0]
    traceback.print_exc()

When the download of 'tides.gif' via urlretrieve(imgUrl, "tides.gif") takes longer than time.sleep(seconds), the file on disk is empty or incomplete, and Image.open("tides.gif") raises an IOError (because tides.gif is 0 kB).

How can I check the status of urlretrieve(imgUrl, "tides.gif") so that my program advances only after the statement has completed successfully?
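A note on the premise: urlretrieve is documented to return only once the transfer has finished (or to raise on failure), so the time.sleep(5) is not what gates completion. It also accepts an optional reporthook callback for observing progress. A minimal sketch, using Python 3's urllib.request (the question's code is Python 2) and a local file:// URL so it runs without network access:

```python
import os
import tempfile
from pathlib import Path
from urllib.request import urlretrieve

progress = []

def reporthook(blocknum, blocksize, totalsize):
    # Called once before the first block and once per block copied.
    progress.append((blocknum, blocksize, totalsize))

# A local file stands in for imgUrl so the sketch runs offline;
# a real http:// URL would behave the same way.
src = os.path.join(tempfile.gettempdir(), "src_demo.gif")
with open(src, "wb") as fh:
    fh.write(b"GIF89a" + b"\x00" * 100)

dest = os.path.join(tempfile.gettempdir(), "tides_demo.gif")
# urlretrieve does not return until the copy is finished (or it raises).
path, headers = urlretrieve(Path(src).as_uri(), dest, reporthook)

print("saved %d bytes, %d hook calls" % (os.path.getsize(dest), len(progress)))
```

The reporthook receives (block number, block size, total size), so it can also drive a progress bar; Python 2's urllib.urlretrieve takes the same reporthook parameter.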

Recommended Answer

Requests is nicer than urllib, but you should be able to do this to download the file synchronously:

import urllib

f = urllib.urlopen(imgUrl)  # imgUrl as defined in the question
with open("tides.gif", "wb") as imgFile:
    imgFile.write(f.read())
f.close()
# you won't get past this point until you've downloaded
# all of the image at imgUrl or an exception is raised
print "Got it!"

The downside of this read()-based approach is that it buffers the whole file in memory, so if you're downloading a lot of images at once you may end up using a lot of RAM. That's unlikely here, but still worth knowing.
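If that buffering ever becomes a concern, the response body can be streamed to disk in fixed-size chunks instead, keeping memory use constant regardless of file size. A minimal sketch, using Python 3's urllib.request and shutil.copyfileobj (the answer above is Python 2), with a local file:// URL standing in for imgUrl:

```python
import os
import shutil
import tempfile
from pathlib import Path
from urllib.request import urlopen

def download(url, filename, chunk_size=64 * 1024):
    # Stream the response to disk in fixed-size chunks instead of
    # buffering the whole body with read().
    with urlopen(url) as response, open(filename, "wb") as out:
        shutil.copyfileobj(response, out, chunk_size)

# Demo with a local file so the sketch runs without network access.
src = os.path.join(tempfile.gettempdir(), "chunk_src.bin")
with open(src, "wb") as fh:
    fh.write(b"x" * 200000)  # 200 kB of dummy data

dst = os.path.join(tempfile.gettempdir(), "chunk_dst.bin")
download(Path(src).as_uri(), dst)
print("copied %d bytes" % os.path.getsize(dst))
```

As with the answer's version, control only reaches the line after download() once the whole file has been written or an exception has been raised.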
