Scrapy, scrape pages from second set of links

Problem description

I've been going through the Scrapy documentation today, trying to get a working version of the tutorial spider (https://docs.scrapy.org/en/latest/intro/tutorial.html#our-first-spider) running on a real-world example. My example is slightly different in that it has 2 levels of next pages, i.e.

start_url > city page > unit page

It is the unit pages I want to grab data from.

My code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    def parse(self, response):
        for quote in response.css('div.property-body'):
            yield {
                'name': quote.xpath('//span/a/text()').extract(),
                'type': quote.xpath('//div/h4/text()').extract(),
                'price_amens': quote.xpath('//div/p/text()').extract(),
                'distance_beds': quote.xpath('//li/p/text()').extract()
            }

            # Purpose is to crawl links of cities
            next_page = response.css('a.listing-item__link::attr(href)').extract_first()
            if next_page is not None:
                next_page = response.urljoin(next_page)
                yield scrapy.Request(next_page, callback=self.parse)

            # Purpose is to crawl links of units
            next_unit_page = response.css(response.css('a.text-highlight__inner::attr(href)').extract_first())
            if next_unit_page is not None:
                next_unit_page = response.urljoin(next_unit_page)
                yield scrapy.Request(next_unit_page, callback=self.parse)

But when I run this I get:

INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

So I am thinking my code is not set up to retrieve the links in the flow mentioned above, but I am not sure how best to do that?

Updated flow:

Main page > City page > Building page > Unit page

It's still the unit page I want to get the data from.

Updated code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    def parse(self, response):
        for quote in response.css('div.site-wrapper'):
            yield {
                'area_name': quote.xpath('//div/ul/li/a/span/text()').extract(),
                'type': quote.xpath('//div/div/div/h1/span/text()').extract(),
                'period': quote.xpath('/html/body/div/div/section/div/form/h4/span/text()').extract(),
                'duration_weekly': quote.xpath('//html/body/div/div/section/div/form/div/div/em/text()').extract(),
                'guide_total': quote.xpath('//html/body/div/div/section/div/form/div/div/p/text()').extract(),              
                'amenities': quote.xpath('//div/div/div/ul/li/p/text()').extract(),              
            }

            # Purpose is to crawl links of cities
            next_page = response.xpath('//html/body/div/footer/div/div/div/ul/li/a[@class="listing-item__link"]/@href').extract()
            if next_page is not None:
                next_page = response.urljoin(next_page)
                yield scrapy.Request(next_page, callback=self.parse)

            # Purpose is to crawl links of units
            next_unit_page = response.xpath('//li/div/h3/span/a/@href').extract()
            if next_unit_page is not None:
                next_unit_page = response.urljoin(next_unit_page)
                yield scrapy.Request(next_unit_page, callback=self.parse)

            # Purpose to crawl crawl pages on full unit info

            last_unit_page = response.xpath('//div/div/div[@class="content__btn"]/a/@href').extract()
            if last_unit_page is not None:
                last_unit_page = response.urljoin(last_unit_page)
                yield scrapy.Request(last_unit_page, callback=self.parse)

Solution

Let's start with the logic:

  1. Scrape the main page - get all the cities
  2. Scrape the city pages - get all the unit urls
  3. Scrape the unit pages - get all the data wanted

I've made an example of how you could implement this in a Scrapy spider below. I was not able to find all the info you mention in your example code, but I hope the code is clear enough for you to understand what it does and how to add the info you need.

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://www.unitestudents.com/',
    ]

    # Step 1
    def parse(self, response):
        for city in response.xpath('//select[@id="frm_homeSelect_city"]/option[not(contains(text(),"Select your city"))]/text()').extract(): # Select all cities listed in the select (exclude the "Select your city" option)
            yield scrapy.Request(response.urljoin("/"+city), callback=self.parse_citypage)

    # Step 2
    def parse_citypage(self, response):
        for url in response.xpath('//div[@class="property-header"]/h3/span/a/@href').extract(): # Select the url of each property
            yield scrapy.Request(response.urljoin(url), callback=self.parse_unitpage)

        # I could not find any pagination. Otherwise it would go here (see the sketch after this code block).

    # Step 3
    def parse_unitpage(self, response):
        unitTypes = response.xpath('//div[@class="room-type-block"]/h5/text()').extract() + response.xpath('//h4[@class="content__header"]/text()').extract()
        for unitType in unitTypes: # There can be multiple unit types so we yield an item for each unit type we can find.
            yield {
                'name': response.xpath('//h1/span/text()').extract_first(),
                'type': unitType,
                # 'price': response.xpath('XPATH GOES HERE'), # Could not find a price on the page
                # 'distance_beds': response.xpath('XPATH GOES HERE') # Could not find such info
            }
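
If the city pages do turn out to be paginated, the usual Scrapy pattern is to follow the "next" link with the same callback, so every page of results is visited. Here is a minimal sketch of how parse_citypage could do that, assuming a hypothetical a.pagination-next::attr(href) selector (the real class name would have to be checked against the site):

    def parse_citypage(self, response):
        for url in response.xpath('//div[@class="property-header"]/h3/span/a/@href').extract():
            yield scrapy.Request(response.urljoin(url), callback=self.parse_unitpage)

        # Hypothetical selector: follow the "next" pagination link, if any,
        # and re-enter this same callback for the following page of results.
        next_page = response.css('a.pagination-next::attr(href)').extract_first()
        if next_page is not None:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse_citypage)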

I think the code is pretty clean and simple. Comments should clarify why I chose to use the for loops. If something is not clear, let me know and I'll try to explain it.
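
For reference, a spider like this is run from inside a Scrapy project with the crawl command; -o is the standard feed-export flag, and units.json here is just an arbitrary output filename:

scrapy crawl quotes -o units.json

This writes every yielded item into a single JSON array; using a .jl extension instead would give one JSON object per line, which is often easier to process incrementally.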
