Get scrapy spider to crawl entire site


Problem description

I am using scrapy to crawl old sites that I own, and I am using the code below as my spider. I don't mind having a file outputted for each webpage, or a database with all the content within it. But I do need the spider to be able to crawl the whole thing without me having to put in every single URL, as I currently have to do.

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["www.example.com"]
    start_urls = [
        "http://www.example.com/contactus"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)

Solution

To crawl the whole site, you should use CrawlSpider instead of scrapy.Spider.

Here's an example

For your purposes try using something like this:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    # Follow every link found within allowed_domains and pass each response to parse_item.
    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Save the page body to an .html file named after the second-to-last
        # '/'-separated piece of the URL.
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)

Also, take a look at this article
