Grab an image via the web and save it with Python


Problem Description


I want to be able to download an image (to my computer or to a web server), resize it, and upload it to S3. The piece concerned here is the downloading.


What would be a recommended way to do the downloading portion within Python (i.e., without using external tools, bash, etc.)? I want it stored in memory until it's done with (versus downloading the image to a local drive and then working with it). Any help is much appreciated.

Recommended Answer


urllib (simple but a bit rough) and urllib2 (powerful but a bit more complicated) are the recommended standard library modules for grabbing data from a URL (either to memory or to disk). For simple-enough needs, x = urllib.urlopen(theurl) will give you an object that lets you access the response headers (e.g., to find out the image's content-type) and the data (via x.read()); urllib2 works similarly but gives you much more control than plain urllib does: proxying, user agent, cookies, HTTPS, authentication, etc.
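The answer above refers to Python 2's urllib/urllib2; in Python 3 both were merged into urllib.request, but the in-memory pattern is the same: read the response body into a bytes buffer instead of writing it to disk. A minimal sketch (the function name and the use of a `data:` URL for the offline demo are illustrative choices, not from the original answer):

```python
import base64
import urllib.request  # Python 3 successor to urllib/urllib2
from io import BytesIO

def fetch_to_memory(url):
    """Fetch a URL entirely into memory.

    Returns (content_type, buffer): the response's content type and a
    BytesIO holding the raw bytes -- nothing is written to disk, so the
    buffer can be resized with Pillow or handed straight to an S3 upload.
    """
    with urllib.request.urlopen(url) as response:
        content_type = response.headers.get_content_type()
        buffer = BytesIO(response.read())  # whole body held in RAM
    return content_type, buffer

# Offline demo using a data: URL so no network is needed; a real
# image URL like "https://example.com/pic.png" works the same way.
payload = base64.b64encode(b"\x89PNG fake image bytes").decode("ascii")
ctype, buf = fetch_to_memory("data:image/png;base64," + payload)
print(ctype)                      # image/png
print(buf.getvalue()[:4])         # first bytes of the in-memory image
```

For real downloads you would also want a timeout (`urlopen(url, timeout=10)`) and error handling for `urllib.error.URLError`.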
