Scrapy shell works but actual script returns 404 error


Problem description

scrapy shell http://www.zara.com/us

returns the correct 200 status code:

2017-01-05 18:34:20 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: zara)
2017-01-05 18:34:20 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'zara.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['zara.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'zara', 'LOGSTATS_INTERVAL': 0, 'USER_AGENT': 'zara (+http://www.yourdomain.com)'}
2017-01-05 18:34:20 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-01-05 18:34:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2017-01-05 18:34:20 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-01-05 18:34:20 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-01-05 18:34:20 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-05 18:34:20 [scrapy.core.engine] INFO: Spider opened
2017-01-05 18:34:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.zara.com/robots.txt> (referer: None) ['cached']
2017-01-05 18:34:20 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://www.zara.com/us/> from <GET http://www.zara.com/us>
2017-01-05 18:34:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.zara.com/us/> (referer: None) ['cached']
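For reference, the status can also be confirmed by hand inside the same shell session (a quick transcript; response is the object the shell exposes after fetching, and the final URL reflects the 301 redirect shown above):

>>> response.status
200
>>> response.url
'http://www.zara.com/us/'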

But my actual .py script gets a 404 error when I request www.zara.com/us. If I use www.zara.com, the page returns a 200, but when I try the country-specific site, it returns a 404...

import scrapy

from zara.items import ProductItem  # assumed location of the item definition


class ZaraSpider(scrapy.Spider):

    name = "zara-us"
    # the attribute Scrapy reads is the plural allowed_domains,
    # and it takes domains, not URLs
    allowed_domains = ['www.zara.com']
    start_urls = [
        "http://www.zara.com/us"
    ]
    # let 404 responses through to parse() instead of dropping them
    handle_httpstatus_list = [404]

    # navigating main page
    def parse(self, response):

        # grab the first category in the navigation sidebar
        categories = response.xpath('//*[@id="menu"]/ul/li')
        collections = categories[0].xpath('a//text()').extract()
        yield ProductItem(collection=collections[0])
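Note that handle_httpstatus_list = [404] lets the 404 body reach parse(), where the //*[@id="menu"] XPath most likely matches nothing and categories[0] raises an IndexError; that would explain the "Spider error processing" line in the crawl log below. A defensive sketch of the same method (the early-return guard is an addition, not part of the original spider):

    def parse(self, response):
        if response.status == 404:
            # bail out instead of indexing into an empty SelectorList
            self.logger.warning("Got 404 for %s, skipping", response.url)
            return
        categories = response.xpath('//*[@id="menu"]/ul/li')
        collections = categories[0].xpath('a//text()').extract()
        yield ProductItem(collection=collections[0])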

Running scrapy crawl zara-us in the terminal:

2017-01-05 18:45:24 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: zara)
2017-01-05 18:45:24 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'zara.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['zara.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'zara', 'USER_AGENT': 'zara (+http://www.yourdomain.com)'}
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-01-05 18:45:24 [scrapy.core.engine] INFO: Spider opened
2017-01-05 18:45:24 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-05 18:45:24 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-05 18:45:25 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.zara.com/robots.txt> (referer: None) ['cached']
2017-01-05 18:45:25 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.zara.com/us> (referer: None) ['cached']
2017-01-05 18:45:25 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.zara.com/us> (referer: None)

Recommended answer

By default, Scrapy sets ROBOTSTXT_OBEY to True for every new project, which means that before your spider can scrape anything, it checks the website's robots.txt file to see what it is allowed and disallowed to crawl. That is why both crawl logs above fetch http://www.zara.com/robots.txt first.

To disable this, set ROBOTSTXT_OBEY to False in the settings.py file (or simply delete the setting; the project template is what turned it on, and Scrapy's global default is False).
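A minimal settings.py sketch (only the relevant line is shown; everything else generated by scrapy startproject stays as it is):

# zara/settings.py
# Stop checking robots.txt before every crawl. The project template is what
# set this to True; Scrapy's global default is False.
ROBOTSTXT_OBEY = False

One more thing worth noting from the crawl log: both the robots.txt and page responses are marked ['cached'] because HTTPCACHE_ENABLED is True, so a previously cached 404 can be replayed even after the setting is fixed; clearing the project's HTTP cache (by default under .scrapy/httpcache) forces a fresh download.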

See the ROBOTSTXT_OBEY entry in the Scrapy settings documentation for details.
