Why do I get urllib2.HTTPError with urllib2 and no errors with urllib?
Question
I have the following simple code:
import urllib2
import sys
sys.path.append('../BeautifulSoup/BeautifulSoup-3.1.0.1')
from BeautifulSoup import *
page='http://en.wikipedia.org/wiki/Main_Page'
c=urllib2.urlopen(page)
This code generates the following error messages:
c=urllib2.urlopen(page)
File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
return _opener.open(url, data)
File "/usr/lib64/python2.4/urllib2.py", line 364, in open
response = meth(req, response)
File "/usr/lib64/python2.4/urllib2.py", line 471, in http_response
response = self.parent.error(
File "/usr/lib64/python2.4/urllib2.py", line 402, in error
return self._call_chain(*args)
File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
result = func(*args)
File "/usr/lib64/python2.4/urllib2.py", line 480, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
But if I replace urllib2 with urllib, I get no error messages. Can anybody explain this behavior?
Answer
The original urllib simply does not raise an exception on a 403 code. If you add print c.getcode() as the last line of your program, urllib will reach it and still print out 403.
Then if you do print c.read() at the end, you will see that you did indeed get an error page from Wikipedia. It's just a matter of urllib2 deciding to treat an error 403 as a runtime exception, versus urllib allowing you to still get an error 403 and then do something with the page.
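The practical consequence is that with urllib2 you catch the HTTPError instead of inspecting a return code, and the caught exception is itself a file-like response: its code, headers, and body are all still available. Here is a minimal sketch that shows this without touching the network (the URL and body below are made-up placeholders, not a real Wikipedia response); the import shim is only there so the snippet also runs on Python 3, where urllib2 was renamed:

```python
import io

try:
    from urllib2 import HTTPError            # Python 2, as in the question
except ImportError:
    from urllib.error import HTTPError       # Python 3 name for the same class

# HTTPError doubles as a response object, so after catching it you can
# still read the error page, just as urllib would have let you.
try:
    raise HTTPError('http://en.wikipedia.org/wiki/Main_Page', 403,
                    'Forbidden', {}, io.BytesIO(b'<html>error page</html>'))
except HTTPError as e:
    print(e.code)          # 403
    body = e.read()        # the error page body, available despite the exception
    print(body)
```

In real code you would wrap the original c = urllib2.urlopen(page) call in the same try/except HTTPError block rather than raising the error by hand as this sketch does.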