Scrapy crawler spider doesn't follow links

Question

For this, I used the example from the Scrapy crawl spider documentation: http://doc.scrapy.org/en/latest/topics/spiders.html

I want to get links from a web page and follow them to parse a table with statistics, but somehow no links are grabbed and followed to the pages that hold the data. Here is my script:

from basketbase.items import BasketbaseItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request



class Basketspider(CrawlSpider):
    name = "basketsp"
    allowed_domains = ["euroleague.net"]
    start_urls = ["http://www.euroleague.net/main"]
    rules = (
        Rule(SgmlLinkExtractor(allow=("results/by-date?seasoncode=E2000")), follow=True),
        Rule(SgmlLinkExtractor(allow=("showgame?gamecode=165&seasoncode=E2000#!boxscore")), callback='parse_item'),
    )


    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        sel = HtmlXPathSelector(response)
        items=[]
        item = BasketbaseItem()
        item['date'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Game date
        item['time'] = sel.select('//div[@class="gs-dates"]/span[@class="GameScoreTimeContainer"]/text()').extract() # Game time
        item['stage'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Stage of tournament
        item['home'] = sel.select('//div[@class="gs-teams"]/a[@class="localClub"]/text()').extract() #Home team
        item['guest'] = sel.select('//div[@class="gs-teams"]/a[@class="roadClub"]/text()').extract() #Visitor team
        item['referees'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblReferees"]/text()').extract() #Referees
        item['attendance'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblAudience"]/text()').extract()
        # Quarter-by-quarter scores; tr[2] and tr[3] are the two team rows
        partials = '//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]'
        item['fst'] = sel.select(partials + '//tr[2]/td[2][@class="AlternatingColumn"]/text()').extract() + sel.select(partials + '//tr[3]/td[2][@class="AlternatingColumn"]/text()').extract()
        item['snd'] = sel.select(partials + '//tr[2]/td[3][@class="NormalColumn"]/text()').extract() + sel.select(partials + '//tr[3]/td[3][@class="NormalColumn"]/text()').extract()
        item['trd'] = sel.select(partials + '//tr[2]/td[4][@class="AlternatingColumn"]/text()').extract() + sel.select(partials + '//tr[3]/td[4][@class="AlternatingColumn"]/text()').extract()
        item['tth'] = sel.select(partials + '//tr[2]/td[5][@class="NormalColumn"]/text()').extract() + sel.select(partials + '//tr[3]/td[5][@class="NormalColumn"]/text()').extract()
        item['xt1'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt2'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt3'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt4'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['game_id'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblReferees"]/text()').extract() # Game ID construct
        item['arena'] = sel.select('//div[@class="gs-dates"]/text()').extract() #Arena
        item['result'] = sel.select('//span[@class="score"]/text()').extract() #Result
        item['league'] = sel.select('//div[@class="gs-dates"]/text()').extract() #League
        print item['date'],item['time'], item['stage'], item['home'],item['guest'],item['referees'],item['attendance'],item['fst'],item['snd'],item['trd'],item['tth'],item['result']
        items.append(item)
        return items  # hand the scraped items back to Scrapy

Here is the output from the terminal:

scrapy crawl basketsp
2013-11-17 01:40:15+0200 [scrapy] INFO: Scrapy 0.16.2 started (bot: basketbase)
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole,   CloseSpider, WebService, CoreStats, SpiderState
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled item pipelines: 
2013-11-17 01:40:15+0200 [basketsp] INFO: Spider opened
2013-11-17 01:40:15+0200 [basketsp] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-11-17 01:40:15+0200 [basketsp] DEBUG: Crawled (200) <GET http://www.euroleague.net/main> (referer: None)
2013-11-17 01:40:15+0200 [basketsp] INFO: Closing spider (finished)
2013-11-17 01:40:15+0200 [basketsp] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 228,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 9018,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 11, 16, 23, 40, 15, 496752),
     'log_count/DEBUG': 7,
     'log_count/INFO': 4,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2013, 11, 16, 23, 40, 15, 229125)}
2013-11-17 01:40:15+0200 [basketsp] INFO: Spider closed (finished)

What am I doing wrong here? Any ideas would be a great help. I tried leaving SgmlLinkExtractor() empty so that all links would be followed, but I get the same result: there is no indication that the crawl spider works at all.

I'm running Scrapy version 0.16.2 on Python 2.7.2+.

Answer

Scrapy is misinterpreting the content type of the start URL.

You can verify this using scrapy shell:

$ scrapy shell 'http://www.euroleague.net/main' 
2013-11-18 16:39:26+0900 [scrapy] INFO: Scrapy 0.21.0 started (bot: scrapybot)
...

AttributeError: 'Response' object has no attribute 'body_as_unicode'
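
If the shell still opens despite the traceback, you can also inspect the response object directly. A quick check (the printed values are what you would expect given the missing header, not verbatim output):

>>> type(response)
<class 'scrapy.http.response.Response'>   # a plain Response, not HtmlResponse
>>> response.headers.get('Content-Type')  # None: the server sends no Content-Type header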

See my previous answer about the missing body_as_unicode attribute. I notice that the server does not set any Content-Type header.

CrawlSpider ignores non-HTML responses, so the response is never processed and no links are followed.
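
For reference, CrawlSpider's link-following step bails out early on anything that is not an HtmlResponse. A simplified sketch of that check, paraphrased from the Scrapy 0.16-era source (not the verbatim code):

from scrapy.http import HtmlResponse, Request

# Paraphrased sketch of CrawlSpider._requests_to_follow (Scrapy 0.16.x)
def _requests_to_follow(self, response):
    if not isinstance(response, HtmlResponse):
        return  # plain Response: no links are extracted, nothing is followed
    seen = set()
    for rule in self._rules:
        for link in rule.link_extractor.extract_links(response):
            if link not in seen:
                seen.add(link)
                yield Request(url=link.url, callback=self._response_downloaded)

Since the start page comes back as a plain Response here, the method returns immediately and the rules never run.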

I would suggest opening an issue on GitHub, as I think Scrapy should be able to handle this case transparently.

As a workaround, you could override the CrawlSpider parse method, create an HtmlResponse from the response object passed in, and pass that on to the superclass parse method.
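
A minimal sketch of that workaround (the 'utf-8' encoding is an assumption; adjust it to whatever the site actually serves):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.http import HtmlResponse

class Basketspider(CrawlSpider):
    # ... name, allowed_domains, start_urls, rules as before ...

    def parse(self, response):
        # The server sends no Content-Type header, so Scrapy builds a plain
        # Response; re-wrap it as an HtmlResponse so that CrawlSpider's rules
        # can extract and follow links.
        if not isinstance(response, HtmlResponse):
            response = HtmlResponse(url=response.url,
                                    body=response.body,
                                    headers=response.headers,
                                    encoding='utf-8')  # assumed encoding
        return super(Basketspider, self).parse(response)

Depending on your Scrapy version, response.replace(cls=HtmlResponse) may do the same re-wrapping in one call.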
