How do Scrapy rules work with crawl spider


Question


I'm having a hard time understanding Scrapy crawl spider rules. I have an example that doesn't work the way I'd like it to, so it could be one of two things:

  1. I don't understand how rules work.
  2. I formed an incorrect regex that prevents me from getting the results I need.

OK, here is what I want to do:

I want to write a crawl spider that will get all the available statistics information from the http://www.euroleague.net website. The page that hosts all the information I need for the start is http://www.euroleague.net/main/results/by-date.

Step 1

As a first step, I am thinking of extracting the "Seasons" link(s) and following them. Here is the HTML/href that I intend to match (I want to match all links in the "Seasons" section one by one, but I think it will be easier to have one link as an example):

href="/main/results/by-date?seasoncode=E2001"

And here is a rule/regex that I created for it:

Rule(SgmlLinkExtractor(allow=('by-date\?seasoncode=E\d+',)), follow=True),
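
A quick way to sanity-check a pattern like this before wiring it into a rule is to run it against the example href with Python's standard re module (a standalone check, not part of the spider). Note what happens when the backslashes are dropped:

    import re

    href = '/main/results/by-date?seasoncode=E2001'

    # Link extractors apply allow patterns with re.search, so the pattern may
    # match anywhere inside the (absolute) URL.
    print(re.search(r'by-date\?seasoncode=E\d+', href))   # match object
    # Without the backslashes, '?' makes the preceding 'e' optional and 'd+'
    # matches literal 'd' characters, not digits:
    print(re.search(r'by-date?seasoncode=Ed+', href))     # None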

Step 2

When the spider brings me to the web page http://www.euroleague.net/main/results/by-date?seasoncode=E2001, for the second step I want the spider to extract the link(s) from the "Regular Season" section. In this case, let's say it should be "Round 1". The HTML/href that I am looking for is:

<a href="/main/results/by-date?seasoncode=E2001&gamenumber=1&phasetypecode=RS"

And the rule/regex that I constructed would be:

Rule(SgmlLinkExtractor(allow=('seasoncode=E\d+&gamenumber=\d+&phasetypecode=\w+',)), follow=True),

Step 3

Now that I have reached the page (http://www.euroleague.net/main/results/by-date?seasoncode=E2001&gamenumber=1&phasetypecode=RS), I am ready to extract the links that lead to the pages that have all the information I need. I am looking for the HTML/href:

href="/main/results/showgame?gamenumber=1&phasetypecode=RS&gamecode=4&seasoncode=E2001#!boxscore"

And the regex that has to match it would be:

Rule(SgmlLinkExtractor(allow=('gamenumber=\d+&phasetypecode=\w+&gamecode=\d+&seasoncode=E\d+',)), callback='parse_item'),
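
One detail that is easy to miss when writing patterns like the ones in Steps 2 and 3: SgmlLinkExtractor canonicalizes the URLs it returns by default, which among other things sorts the query parameters alphabetically and drops fragments like #!boxscore. Here is a minimal standalone sketch for inspecting what an extractor actually returns from a snippet of HTML, assuming the Scrapy 0.20-era imports used in this post:

    from scrapy.http import HtmlResponse
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

    body = ('<html><body><a href="/main/results/showgame?gamenumber=1'
            '&amp;phasetypecode=RS&amp;gamecode=4&amp;seasoncode=E2001#!boxscore">'
            'game</a></body></html>')
    response = HtmlResponse(url='http://www.euroleague.net/main/results/by-date',
                            body=body)

    # With no allow pattern the extractor returns every link; note that the
    # returned URL is canonicalized: absolute, fragment stripped, and query
    # parameters reordered.
    for link in SgmlLinkExtractor().extract_links(response):
        print(link.url)

That canonicalization is why the URLs in the crawl log further down (e.g. ...?gamenumber=23&phasetypecode=TS++++++++&seasoncode=E2013) look different from the hrefs in the page source.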

The problem

I think the crawler should work something like this: the rules act like a loop. When the first link is matched, the crawler follows it to the "Step 2" page, then to "Step 3", and after that it extracts the data. After doing that, it returns to "Step 1" to match the second link, and the loop starts again until there are no links left in the first step.

From what I see in the terminal, it seems that the crawler loops within "Step 1". It loops through all the "Step 1" links, but never involves the "Step 2"/"Step 3" rules.

2014-02-28 00:20:31+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2000> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 00:20:31+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2001> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 00:20:31+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2002> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 00:20:32+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2003> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 00:20:33+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2004> (referer: http://www.euroleague.net/main/results/by-date)

After it loops through all the "Seasons" links, it starts on links that I don't see in any of the three steps I mentioned:

http://www.euroleague.net/main/results/by-date?gamenumber=23&phasetypecode=TS++++++++&seasoncode=E2013

And a link structure like that can only be found if you loop through all the links in "Step 2" without returning to the "Step 1" starting point.

The question would be: how do the rules work? Do they work step by step, as I intend them to in this example, or does every rule have its own loop, going from rule to rule only after it has finished looping through the first rule?

That is how I see it. Of course, it is entirely possible that something is wrong with my rules/regex.

And here is everything I am getting from the terminal:

scrapy crawl basketsp_test -o item6.xml -t xml
2014-02-28 01:09:20+0200 [scrapy] INFO: Scrapy 0.20.0 started (bot: basketbase)
2014-02-28 01:09:20+0200 [scrapy] DEBUG: Optional features available: ssl, http11, boto, django
2014-02-28 01:09:20+0200 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'basketbase.spiders', 'FEED_FORMAT': 'xml', 'SPIDER_MODULES': ['basketbase.spiders'], 'FEED_URI': 'item6.xml', 'BOT_NAME': 'basketbase'}
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Enabled item pipelines: Basketpipeline3, Basketpipeline1db
2014-02-28 01:09:21+0200 [basketsp_test] INFO: Spider opened
2014-02-28 01:09:21+0200 [basketsp_test] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-02-28 01:09:21+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-02-28 01:09:21+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date> (referer: None)
2014-02-28 01:09:22+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:22+0200 [basketsp_test] DEBUG: Filtered duplicate request: <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2013> - no more duplicates will be shown (see DUPEFILTER_CLASS)
2014-02-28 01:09:22+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2000> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:23+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2001> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:23+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2002> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:24+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2003> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:24+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2004> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:25+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2005> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:26+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2006> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:26+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2007> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:27+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2008> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:27+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2009> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:28+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2010> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:29+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2011> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:29+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?seasoncode=E2012> (referer: http://www.euroleague.net/main/results/by-date)
2014-02-28 01:09:30+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=24&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:30+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=23&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:31+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=22&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:32+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=21&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:32+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=20&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:33+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=19&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:34+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=18&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:34+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=17&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:35+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=16&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:35+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=15&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:36+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=14&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:37+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=13&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:37+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=12&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:38+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=11&phasetypecode=TS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:39+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=10&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:39+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=9&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:40+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=8&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:40+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=7&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:41+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=6&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:42+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=5&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:42+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=4&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:43+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=3&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:44+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=2&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:44+0200 [basketsp_test] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date?gamenumber=1&phasetypecode=RS++++++++&seasoncode=E2013> (referer: http://www.euroleague.net/main/results/by-date?seasoncode=E2013)
2014-02-28 01:09:44+0200 [basketsp_test] INFO: Closing spider (finished)
2014-02-28 01:09:44+0200 [basketsp_test] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 13663,
     'downloader/request_count': 39,
     'downloader/request_method_count/GET': 39,
     'downloader/response_bytes': 527838,
     'downloader/response_count': 39,
     'downloader/response_status_count/200': 39,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 2, 27, 23, 9, 44, 569579),
     'log_count/DEBUG': 46,
     'log_count/INFO': 3,
     'request_depth_max': 2,
     'response_received_count': 39,
     'scheduler/dequeued': 39,
     'scheduler/dequeued/memory': 39,
     'scheduler/enqueued': 39,
     'scheduler/enqueued/memory': 39,
     'start_time': datetime.datetime(2014, 2, 27, 23, 9, 21, 111255)}
2014-02-28 01:09:44+0200 [basketsp_test] INFO: Spider closed (finished)

And here is the rules part of the crawler:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class Basketspider(CrawlSpider):
    name = "basketsp_test"
    download_delay = 0.5

    allowed_domains = ["www.euroleague.net"]
    start_urls = ["http://www.euroleague.net/main/results/by-date"]
    rules = (
        Rule(SgmlLinkExtractor(allow=('by-date\?seasoncode=E\d+',)), follow=True),
        Rule(SgmlLinkExtractor(allow=('seasoncode=E\d+&gamenumber=\d+&phasetypecode=\w+',)), follow=True),
        Rule(SgmlLinkExtractor(allow=('gamenumber=\d+&phasetypecode=\w+&gamecode=\d+&seasoncode=E\d+',)), callback='parse_item'),
    )
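
A quick way to check which links each of these extractors matches on the live page, before running the whole crawl, is scrapy shell, which opens a Python session with the downloaded page already bound to response:

    scrapy shell "http://www.euroleague.net/main/results/by-date"
    >>> from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    >>> extractor = SgmlLinkExtractor(allow=('by-date\?seasoncode=E\d+',))
    >>> [link.url for link in extractor.extract_links(response)]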

Solution

You are right: according to the source code, before returning each response to the callback function, the crawler loops over the rules, starting from the first. You should keep this in mind when you write rules. For example, the following rules:

rules = (
    Rule(SgmlLinkExtractor(allow=(r'/items',)), callback='parse_item', follow=True),
    Rule(SgmlLinkExtractor(allow=(r'/items/electronics',)), callback='parse_electronic_item', follow=True),
)

The second rule will never be applied, since all the links will be extracted by the first rule with the parse_item callback. The matches for the second rule will be filtered out as duplicates by scrapy.dupefilter.RFPDupeFilter. You should use deny to match the links correctly:

rules = (
    Rule(SgmlLinkExtractor(allow=(r'/items',)), deny=(r'/items/electronics',), callback='parse_item', follow=True),
    Rule(SgmlLinkExtractor(allow=(r'/items/electronics',)), callback='parse_electronic_item', follow=True),
)
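
For reference, the loop described above is CrawlSpider._requests_to_follow; a lightly simplified version of it, as it appears around Scrapy 0.20 (details vary between versions):

    from scrapy.http import Request, HtmlResponse

    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        seen = set()
        # Rules are tried in definition order; a link claimed by an earlier
        # rule is recorded in `seen` and skipped by every later rule.
        for n, rule in enumerate(self._rules):
            links = [l for l in rule.link_extractor.extract_links(response)
                     if l not in seen]
            if links and rule.process_links:
                links = rule.process_links(links)
            for link in links:
                seen.add(link)
                r = Request(url=link.url, callback=self._response_downloaded)
                r.meta.update(rule=n, link_text=link.text)
                yield rule.process_request(r)

Because the first rule whose extractor returns a link wins, reordering the rules (most specific first) would achieve much the same effect as the deny shown above.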
