Scrapy encounters DEBUG: Crawled (400)

Problem description

I'm trying to scrape the page https://zhuanlan.zhihu.com/wangzhenotes with Scrapy.

I run this command:

scrapy shell 'https://zhuanlan.zhihu.com/wangzhenotes'

and get:

DEBUG: Crawled (400) <GET https://zhuanlan.zhihu.com/wangzhenotes> (referer: None)

I guess I'm encountering some kind of anti-scraping measure. How do I know what techniques the site is using?

Here is the full log:

(base) $ scrapy shell 'https://zhuanlan.zhihu.com/wangzhenotes'
2020-07-01 09:46:03 [scrapy.utils.log] INFO: Scrapy 2.1.0 started (bot: scrapybot)
2020-07-01 09:46:03 [scrapy.utils.log] INFO: Versions: lxml 4.5.1.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.7.7 (default, May  6 2020, 04:59:01) - [Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g  21 Apr 2020), cryptography 2.9.2, Platform Darwin-17.7.0-x86_64-i386-64bit
2020-07-01 09:46:03 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-07-01 09:46:03 [scrapy.crawler] INFO: Overridden settings:
{'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
 'LOGSTATS_INTERVAL': 0}
2020-07-01 09:46:03 [scrapy.extensions.telnet] INFO: Telnet Password: 32acb90e56ac4d67
2020-07-01 09:46:03 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage']
2020-07-01 09:46:03 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-07-01 09:46:03 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-07-01 09:46:03 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-07-01 09:46:03 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-07-01 09:46:03 [scrapy.core.engine] INFO: Spider opened
2020-07-01 09:46:10 [scrapy.core.engine] DEBUG: Crawled (400) <GET https://zhuanlan.zhihu.com/wangzhenotes> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x10ba0a090>
[s]   item       {}
[s]   request    <GET https://zhuanlan.zhihu.com/wangzhenotes>
[s]   response   <400 https://zhuanlan.zhihu.com/wangzhenotes>
[s]   settings   <scrapy.settings.Settings object at 0x10ba0a2d0>
[s]   spider     <DefaultSpider 'default' at 0x10bf4e210>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects 
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
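
With the shell still open, a quick way to check whether the 400 is tied to Scrapy's default User-Agent is to re-fetch the page with browser-like headers and compare the status codes. A minimal sketch (the header value is only an example):

from scrapy import Request
req = Request(
    'https://zhuanlan.zhihu.com/wangzhenotes',
    headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                           'AppleWebKit/537.36 (KHTML, like Gecko) '
                           'Chrome/83.0.4103.116 Safari/537.36'},
)
fetch(req)           # re-fetch and update the shell's response object
response.status      # 200 here would point to User-Agent based blocking
response.headers     # server headers (e.g. a WAF banner) can give further hints

The same experiment can also be run from the command line with scrapy shell -s USER_AGENT='...' <url>.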

After adding this to settings.py:

DEFAULT_REQUEST_HEADERS = {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'}

The log changes to:

2020-07-01 11:43:37 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://zhuanlan.zhihu.com/robots.txt> (referer: None)

...

2020-07-01 11:43:37 [protego] DEBUG: Rule at line 19 without any user agent to enforce it on.

...

2020-07-01 11:43:38 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://zhuanlan.zhihu.com/wangzhenotes> (referer: None)

Recommended answer

Add this middleware to the middlewares.py file -

class CustomMiddleware(object):
    """Downloader middleware that forces a browser-like User-Agent on every request."""

    def process_request(self, request, spider):
        # Overwrite whatever User-Agent Scrapy would otherwise send
        request.headers["User-Agent"] = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"

then replace any previously configured middlewares with the new one, like this:

DOWNLOADER_MIDDLEWARES = {
    'projectname.middlewares.CustomMiddleware': 543,
}
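
Note that this does not literally remove Scrapy's built-in middlewares: scrapy.downloadermiddlewares.useragent.UserAgentMiddleware (priority 400) still runs first, and the custom middleware at 543 simply overwrites the header afterwards. To disable the stock middleware explicitly, setting it to None in the same dict should also work (a sketch, not part of the original answer):

DOWNLOADER_MIDDLEWARES = {
    'projectname.middlewares.CustomMiddleware': 543,
    # disable the built-in User-Agent middleware so only the custom one sets the header
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}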

This is no longer needed -

DEFAULT_REQUEST_HEADERS = {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'}
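
For completeness: Scrapy's own USER_AGENT setting is usually the simplest route, since the built-in UserAgentMiddleware applies it to every request with no custom middleware at all. A one-line sketch for settings.py:

# settings.py - read by the stock UserAgentMiddleware
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'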
