Python urllib urlretrieve behind proxy


Problem description

I looked into the documentation of urllib, but all I could find on proxies was related to urlopen. However, I want to download a PDF from a given URL and store it locally, but using a certain proxy server. My approach so far, which did not work:

import urllib2

proxies = {'http': 'http://123.96.220.2:81'}
opener = urllib2.FancyURLopener(proxies)
download = opener.urlretrieve(URL, file_name)

The error is AttributeError: FancyURLopener instance has no attribute 'urlretrieve'.

Recommended answer

I believe you can do something like this:

import urllib2

# Register a proxy-aware opener so every urllib2 request goes through the proxy.
proxy = urllib2.ProxyHandler({'http': '123.96.220.2:81'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

# Download the file and write it locally; the with block closes the file for us.
with open('filename', 'wb') as f:
    f.write(urllib2.urlopen(URL).read())

Since urllib2 doesn't have urlretrieve, you can just use urlopen to get the same effect.
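For larger PDFs it can be worth streaming the response to disk instead of reading it all into memory at once. A minimal sketch of that variant, assuming the same proxy opener is installed as above (the URL and output path below are placeholders):

import shutil
import urllib2

proxy = urllib2.ProxyHandler({'http': '123.96.220.2:81'})
urllib2.install_opener(urllib2.build_opener(proxy))

URL = 'http://example.com/some.pdf'  # placeholder URL
file_name = 'some.pdf'               # placeholder local path

# copyfileobj reads the response in chunks, so the whole PDF never sits in memory.
response = urllib2.urlopen(URL)
with open(file_name, 'wb') as f:
    shutil.copyfileobj(response, f)
response.close()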

You must have got the docs confused, because urllib2 also doesn't have FancyURLopener; that's why you're getting the error.
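For reference, the class the question was reaching for lives in Python 2's urllib module rather than urllib2, and its instance method is retrieve, not urlretrieve. A minimal sketch of that path (URL and output path are placeholders standing in for the question's URL and file_name):

import urllib

# FancyURLopener accepts a proxies mapping directly; its download method is retrieve().
proxies = {'http': 'http://123.96.220.2:81'}
opener = urllib.FancyURLopener(proxies)
opener.retrieve('http://example.com/some.pdf', 'some.pdf')  # placeholder URL and path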

urllib2 is much better at handling proxies and such.

For more info, look here: Urllib2 Docs.
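If you are on Python 3 instead, a rough equivalent (an assumption on my part, not something the answer covers) is urllib.request, which merges the two modules and does provide urlretrieve alongside ProxyHandler:

import urllib.request

# Install a proxy-aware opener globally; urlretrieve then uses it under the hood.
proxy = urllib.request.ProxyHandler({'http': 'http://123.96.220.2:81'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)

urllib.request.urlretrieve('http://example.com/some.pdf', 'some.pdf')  # placeholder URL and path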
