Why does scrapy throw an error for me when trying to spider and parse a site?


Problem description

The following code:

class SiteSpider(BaseSpider):
    name = "some_site.com"
    allowed_domains = ["some_site.com"]
    start_urls = [
        "some_site.com/something/another/PRODUCT-CATEGORY1_10652_-1__85667",
    ]
    rules = (
        Rule(SgmlLinkExtractor(allow=('some_site.com/something/another/PRODUCT-CATEGORY_(.*)', ))),

        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(SgmlLinkExtractor(allow=('some_site.com/something/another/PRODUCT-DETAIL(.*)', )), callback="parse_item"),
    )
    def parse_item(self, response):
        # ... parse stuff

throws the following error:

Traceback (most recent call last):
  File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 1174, in mainLoop
    self.runUntilCurrent()
  File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 796, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 318, in callback
    self._startRunCallbacks(result)
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 424, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 441, in _runCallbacks
    self.result = callback(self.result, *args, **kw)
  File "/usr/lib/pymodules/python2.6/scrapy/spider.py", line 62, in parse
    raise NotImplementedError
exceptions.NotImplementedError: 


When I change the callback to "parse" and the function to "parse", I don't get any errors, but nothing is scraped. I changed it to "parse_items", thinking I might be overriding the parse method by accident. Perhaps I'm setting up the link extractor wrong?


What I want to do is parse each ITEM link on the CATEGORY page. Am I doing this totally wrong?
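As a sanity check on the link extractor, independent of the spider bug: SgmlLinkExtractor's allow= entries are regular expressions searched against each extracted URL, so the patterns can be exercised with the stdlib alone (the URLs below are hypothetical stand-ins for the real, anonymized site):

```python
import re

# The two allow= patterns from the spider above, treated as plain
# regexes (this mirrors how SgmlLinkExtractor applies allow=).
category_pat = re.compile(r'some_site.com/something/another/PRODUCT-CATEGORY_(.*)')
detail_pat = re.compile(r'some_site.com/something/another/PRODUCT-DETAIL(.*)')

# Hypothetical example URLs for the two page types
category_url = "http://some_site.com/something/another/PRODUCT-CATEGORY_10652_-1__85667"
detail_url = "http://some_site.com/something/another/PRODUCT-DETAIL_12345_67890"

print(bool(category_pat.search(category_url)))  # True
print(bool(detail_pat.search(detail_url)))      # True
print(bool(detail_pat.search(category_url)))    # False
```

Both patterns match their intended URLs, so the extractor setup is not the problem; as the answer below explains, the rules were simply never consulted.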

Recommended answer


I needed to change BaseSpider to CrawlSpider. Thanks scrapy users!

http://groups.google.com/group/scrapy-users/browse_thread/thread/4adaba51f7bcd0af#

Bob,


Perhaps it might work if you change from BaseSpider to CrawlSpider? BaseSpider does not seem to implement Rule; see:

http://doc.scrapy.org/topics/spiders.html?highlight=rule#scrapy.contr...

-M
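That one-word change works because of where parse() is implemented. A toy sketch (simplified stand-ins, not Scrapy's actual source) of the difference:

```python
class BaseSpiderSketch(object):
    """Stand-in for scrapy's BaseSpider: every downloaded response is
    handed to parse() unless the request named another callback, and
    the default parse() is deliberately unimplemented."""
    def parse(self, response):
        raise NotImplementedError

class CrawlSpiderSketch(BaseSpiderSketch):
    """Stand-in for CrawlSpider: it overrides parse() to run the
    response through the rules and dispatch each match to that rule's
    callback. Rules declared on a plain BaseSpider are never read."""
    rules = ()  # toy rules: (substring, callback) pairs

    def parse(self, response):
        for pattern, callback in self.rules:
            if pattern in response:
                return callback(response)
        return None

def parse_item(response):
    return "parsed: " + response

class SiteSpiderSketch(CrawlSpiderSketch):
    rules = (("PRODUCT-DETAIL", parse_item),)

print(SiteSpiderSketch().parse("some_site.com/x/PRODUCT-DETAIL_1"))
# parsed: some_site.com/x/PRODUCT-DETAIL_1
```

In real Scrapy the matching is done by regex link extraction rather than a substring test, but the inheritance relationship is the same: only CrawlSpider overrides parse() to consume the rules, which is why subclassing BaseSpider raised NotImplementedError and crawled nothing.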

