Scrapy spider crawls infinitely


Problem description

Task: My spider should be able to crawl every link of the whole domain and recognize whether it is a product link or, for example, a category link, but it should only write product links to items.

I set a rule which allows URLs containing "a-", because it is contained in every product link.

My if-condition should simply check whether a product EAN is listed; if yes, the link is double-checked and should definitely be a product link.
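A quick way to verify such a presence check before running the spider is scrapy shell; this is a minimal sketch, assuming the EAN sits in a div with the producteant class used in the code below (the product URL is a placeholder):

# in a terminal, open the shell on a product page (hypothetical URL):
#   scrapy shell "https://www.topart-online.com/a-some-product"
response.xpath('//div[@class="producteant"]').get()  # the div's HTML on product pages, None otherwise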

After that process, it should save the link to my list.

Problem: The spider collects all links instead of only the links that contain "a-".

Code used

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from ..items import LinkextractorItem

class TopArtSpider(CrawlSpider):
    name = "topart"
    allow_domains = ['topart-online.com']  # note: Scrapy expects "allowed_domains"; this name is silently ignored
    start_urls = [
        'https://www.topart-online.com'
    ]
    custom_settings = {'FEED_EXPORT_FIELDS': ['Link']}

    rules = (
        # only pages whose URL contains "/a-" reach the callback; follow=True keeps crawling from them
        Rule(LinkExtractor(allow='/a-'), callback='parse_filter_item', follow=True),
    )

    def parse_filter_item(self, response):
        exists = response.xpath('.//div[@class="producteant"]').get()
        link = response.xpath('//a/@href')  # selects every link on the page, not just product links
        if exists:
            # the request returned by response.follow() is never yielded, so this line has no effect
            response.follow(url=link.get(), callback=self.parse)

        # this loop sits outside the "if exists:" block, so an item is yielded for every link on the page
        for a in link:
            items = LinkextractorItem()
            items['Link'] = a.get()
            yield items
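The cause of the problem is visible in the callback: the for loop runs at the same indentation level as the if exists: check, so every link on a matched page is yielded whether or not the page carries a product EAN. A minimal sketch of a corrected callback, as a drop-in replacement for parse_filter_item above (keeping the producteant class name from the question):

    def parse_filter_item(self, response):
        # the Rule already filtered for URLs containing "/a-", so the page itself is a product link
        if response.xpath('.//div[@class="producteant"]').get():
            items = LinkextractorItem()
            items['Link'] = response.url
            yield items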

Recommended answer

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class TopartSpider(CrawlSpider):
    name = 'topart'
    allowed_domains = ['topart-online.com']  # correct attribute name, unlike "allow_domains" in the question
    start_urls = ['http://topart-online.com/']

    rules = (
        # every URL containing "/a-" is a product link; follow=True keeps the crawl going
        Rule(LinkExtractor(allow=r'/a-'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # the rule already guarantees this page is a product URL, so just record it
        return {'Link': response.url}
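To run the spider and export the collected links from a script, a minimal sketch looks like this; the module path and the output file name links.csv are hypothetical, and the FEEDS setting requires Scrapy 2.1 or later:

from scrapy.crawler import CrawlerProcess
from myproject.spiders.topart import TopartSpider  # hypothetical module path; adjust to your project layout

process = CrawlerProcess(settings={
    'FEEDS': {'links.csv': {'format': 'csv'}},  # write each yielded {'Link': ...} row to CSV
})
process.crawl(TopartSpider)
process.start()  # blocks until the crawl finishes

Alternatively, from inside the Scrapy project directory, running the spider via the command line with an output file achieves the same result.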
