Python 3.5 urllib.request 403 Forbidden Error


Problem description

import urllib.request
import urllib
from bs4 import BeautifulSoup


url = "https://www.brightscope.com/ratings"
page = urllib.request.urlopen(url)
soup = BeautifulSoup(page, "html.parser")

print(soup.title)

I was trying to access the above site, and the code keeps raising a 403 Forbidden error.

Any ideas?


C:\Users\jerem\AppData\Local\Programs\Python\Python35-32\python.exe "C:/Users/jerem/PycharmProjects/webscraper/url scraper.py"
Traceback (most recent call last):
  File "C:/Users/jerem/PycharmProjects/webscraper/url scraper.py", line 7, in <module>
    page = urllib.request.urlopen(url)
  File "C:\Users\jerem\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 163, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\jerem\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 472, in open
    response = meth(req, response)
  File "C:\Users\jerem\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 582, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Users\jerem\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 510, in error
    return self._call_chain(*args)
  File "C:\Users\jerem\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 444, in _call_chain
    result = func(*args)
  File "C:\Users\jerem\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 590, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
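As a side note, the HTTPError in the traceback can be caught instead of crashing the script. A minimal sketch (the URL is taken from the question; network access is assumed when run):

```python
import urllib.error
import urllib.request

url = "https://www.brightscope.com/ratings"

try:
    page = urllib.request.urlopen(url)
    print(page.getcode())
except urllib.error.HTTPError as e:
    # The server responded but refused the request: e.code holds the
    # HTTP status (403 here), e.reason the phrase, e.headers the headers.
    print("HTTP error:", e.code, e.reason)
except urllib.error.URLError as e:
    # No HTTP response at all (DNS failure, no network, ...).
    print("Connection failed:", e.reason)
```

HTTPError is a subclass of URLError, so the more specific handler must come first.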

Accepted answer

import requests
from bs4 import BeautifulSoup


url = "https://www.brightscope.com/ratings"
headers = {'User-Agent':'Mozilla/5.0'}
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.text, "html.parser")

print(soup.title)

Output:

<title>BrightScope Ratings</title>
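A detail worth noting (not part of the original answer): unlike urllib, requests does not raise on a 403 by itself — it returns the Response and leaves the status check to the caller. A minimal sketch using the same URL and headers:

```python
import requests

url = "https://www.brightscope.com/ratings"
headers = {"User-Agent": "Mozilla/5.0"}

try:
    page = requests.get(url, headers=headers, timeout=10)
    # requests returns normally even for 4xx/5xx responses;
    # raise_for_status() converts those statuses into exceptions.
    page.raise_for_status()
    print(page.status_code)
except requests.HTTPError as e:
    print("Server refused:", e.response.status_code)
except requests.RequestException as e:
    print("Request failed:", e)
```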

First, use requests instead of urllib.

Then, add headers to the request; otherwise the site will ban you, because the default User-Agent identifies the request as a crawler, which the site does not like.
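Alternatively, the same fix works with the standard library alone — a minimal sketch that passes a browser-like User-Agent through urllib.request.Request (the header value mirrors the answer's; the network call is left commented out):

```python
import urllib.request

url = "https://www.brightscope.com/ratings"

# Override urllib's default "Python-urllib/3.x" User-Agent,
# which many sites reject with 403 Forbidden.
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})

# urllib normalizes header names, so the key is stored as "User-agent".
print(req.get_header("User-agent"))

# page = urllib.request.urlopen(req)  # sends the custom header
```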

