Difference between BaseSpider and CrawlSpider


Question


I have been trying to understand the concepts of BaseSpider and CrawlSpider in web scraping. I have read the docs, but there is no mention of BaseSpider. It would be really helpful if someone explained the differences between BaseSpider and CrawlSpider.

Answer

BaseSpider existed previously and is now deprecated (since Scrapy 0.22) - use scrapy.Spider instead:

import scrapy

class MySpider(scrapy.Spider):
    name = "myspider"  # every Scrapy spider needs a unique name
    # ...

scrapy.Spider is the simplest spider: basically, it visits the URLs defined in start_urls or returned by start_requests().

Use CrawlSpider when you need "crawling" behavior - extracting links and following them:

This is the most commonly used spider for crawling regular websites, as it provides a convenient mechanism for following links by defining a set of rules. It may not be the best suited for your particular web sites or project, but it's generic enough for several cases, so you can start from it and override it as needed for more custom functionality, or just implement your own spider.
