How to handle urllib's timeout in Python 3?
First off, my problem is quite similar to this one. I would like a timeout of urllib.urlopen() to generate an exception that I can handle.
Doesn't this fall under URLError?
try:
response = urllib.request.urlopen(url, timeout=10).read().decode('utf-8')
except (HTTPError, URLError) as error:
logging.error(
'Data of %s not retrieved because %s\nURL: %s', name, error, url)
else:
logging.info('Access successful.')
The error message:
resp = urllib.request.urlopen(req, timeout=10).read().decode('utf-8')
File "/usr/lib/python3.2/urllib/request.py", line 138, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.2/urllib/request.py", line 369, in open
response = self._open(req, data)
File "/usr/lib/python3.2/urllib/request.py", line 387, in _open
'_open', req)
File "/usr/lib/python3.2/urllib/request.py", line 347, in _call_chain
result = func(*args)
File "/usr/lib/python3.2/urllib/request.py", line 1156, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/lib/python3.2/urllib/request.py", line 1141, in do_open
r = h.getresponse()
File "/usr/lib/python3.2/http/client.py", line 1046, in getresponse
response.begin()
File "/usr/lib/python3.2/http/client.py", line 346, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.2/http/client.py", line 308, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.2/socket.py", line 276, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
There was a major change in Python 3 when they re-organised the urllib and urllib2 modules into the urllib package. Is it possible that there was a change then that causes this?
The exception is timeout from socket, so
from socket import timeout
try:
response = urllib.request.urlopen(url, timeout=10).read().decode('utf-8')
except (HTTPError, URLError) as error:
logging.error('Data of %s not retrieved because %s\nURL: %s', name, error, url)
except timeout:
logging.error('socket timed out - URL %s', url)
else:
logging.info('Access successful.')
should catch the new exception. Though I'm not sure if that answers your question, as I'm not sure what your question is.