Scrapy NotImplementedError


Question

I am trying to fetch some links before scraping data from the individual pages, but I am getting a NotImplementedError - the traceback is below:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 588, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 76, in parse
    raise NotImplementedError
NotImplementedError
2017-10-13 06:03:58 [scrapy] INFO: Closing spider (finished)
2017-10-13 06:03:58 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 273,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 81464,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 10, 13, 5, 3, 58, 550062),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/NotImplementedError': 1,
 'start_time': datetime.datetime(2017, 10, 13, 5, 3, 56, 552516)}
2017-10-13 06:03:58 [scrapy] INFO: Spider closed (finished)

I have tried:

Adding DOWNLOAD_HANDLERS = {'s3': None,} to settings.py

This didn't seem to do anything. I then switched from scrapy.Spider to scrapy.spiders.CrawlSpider, which did not throw the error message; however, it also does not print out my final_url - am I correct in thinking that it should, if this is set up correctly? My code is below:

# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.http import Request
import scrapy


class Test_spider(scrapy.spiders.CrawlSpider):
    name = "Spider_Testing"
    allowed_domains = ["http://www.example.com/"]
    start_urls = (
        "http://www.example.com/followthrough",
    )

    def parse_links(self, response):
        links = response.xpath('//form/table/tr/td/table//a[div]/@href').extract()
        for link in links:
            base_url = "http://www.example.com/followthrough"  # the full address after / is slightly different than start urls but that should not matter?
            final_url = response.urljoin(base_url, links)
            print(final_url)  # test 1
            print(Request(final_url, callback=self.parse_final))  # test 2
            yield Request(final_url, callback=self.parse_final)

    def parse_final(self, response):
        pass

So my questions are:

  • Is this the correct logic?
  • Is my test print for the final_url correct? - I am thinking #1 is, but not #2

Answer

The error comes from the missing parse method. Since you don't implement start_requests, its default behavior is:

The default implementation generates Request(url, dont_filter=True) for each url in start_urls.

It doesn't set the callback parameter, so it will try to call parse by default:

If a Request doesn’t specify a callback, the spider’s parse() method will be used. Note that if exceptions are raised during processing, errback is called instead.
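
Concretely, the behavior inherited from scrapy.Spider is roughly equivalent to the sketch below (the class name is made up for illustration, and the exact implementation differs between Scrapy versions):

import scrapy
from scrapy.http import Request


class DefaultBehaviourSketch(scrapy.Spider):
    name = "default_behaviour_sketch"

    def start_requests(self):
        for url in self.start_urls:
            # No callback is set, so Scrapy routes each response to
            # self.parse ...
            yield Request(url, dont_filter=True)

    def parse(self, response):
        # ... and the inherited parse() simply raises, which is exactly
        # the traceback shown in the question.
        raise NotImplementedError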

You can fix it by implementing start_requests and specifying the callback parameter:

def start_requests(self):
    for url in self.start_urls:
        yield Request(url, callback=self.parse_links)

Update:
response.urljoin(url) only takes one argument:

Constructs an absolute url by combining the Response’s url with a possible relative url.

You should use response.urljoin(link) or urlparse.urljoin(base_url, link). Also make sure that the links here are relative urls.
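
Putting both fixes together, a sketch of the corrected spider might look like this (class and callback names are taken from the question; allowed_domains is also trimmed to a bare domain, since Scrapy expects domain names there rather than full urls):

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request


class Test_spider(scrapy.Spider):
    name = "Spider_Testing"
    # allowed_domains expects bare domain names, not full urls
    allowed_domains = ["www.example.com"]
    start_urls = (
        "http://www.example.com/followthrough",
    )

    def start_requests(self):
        # Explicit callback, so the default parse() is never needed
        for url in self.start_urls:
            yield Request(url, callback=self.parse_links)

    def parse_links(self, response):
        links = response.xpath('//form/table/tr/td/table//a[div]/@href').extract()
        for link in links:
            # Single argument: response.urljoin resolves the (possibly
            # relative) link against response.url
            final_url = response.urljoin(link)
            yield Request(final_url, callback=self.parse_final)

    def parse_final(self, response):
        pass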

Update 2:
You can add the following code and run it:

if __name__ == '__main__':
    from scrapy.crawler import CrawlerProcess
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    })

    process.crawl(Test_spider)
    process.start()

This allows you to run Scrapy from a script, so you can use ipdb or the debug tools in your IDE to step into it.
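
For example, you can drop a breakpoint straight into the callback - a sketch assuming the ipdb package is installed (the stdlib pdb works the same way):

    def parse_links(self, response):
        # Execution pauses here when the spider is run via the script above,
        # because the crawl runs in the foreground of your terminal.
        import ipdb; ipdb.set_trace()
        links = response.xpath('//form/table/tr/td/table//a[div]/@href').extract()
        for link in links:
            yield Request(response.urljoin(link), callback=self.parse_final)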
