How do I create rules for a CrawlSpider using Scrapy

Question

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from manga.items import MangaItem

class MangaHere(BaseSpider):
    name = "mangah"
    allowed_domains = ["mangahere.com"]
    start_urls = ["http://www.mangahere.com/seinen/"]

    def parse(self,response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li/div')
        items = []
        for site in sites:
            rating = site.select("p/span/text()").extract()
            if rating > 4.5:
                item = MangaItem()
                item["title"] = site.select("div/a/text()").extract()
                item["desc"] = site.select("p[2]/text()").extract()
                item["link"] = site.select("div/a/@href").extract()
                item["rate"] = site.select("p/span/text()").extract()
                items.append(item)

        return items

My goal is to crawl www.mangahere.com/seinen or anything on that site. I want to go through every page and collect books that have a rating greater than 4.5. I started out with a BaseSpider and tried copying and reading the Scrapy tutorial, but it pretty much went over my head. I am here to ask what I need to do to create my rules, and how. I also can't seem to get my condition to work: the code either returns only the very first item and stops regardless of the condition, or grabs everything, again regardless of the condition. I know it's probably pretty messed-up code, but I am still struggling to learn. Feel free to touch up the code or offer other advice.

Answer

Strictly speaking, this doesn't answer the question, since my code uses a BaseSpider instead of a CrawlSpider, but it does fulfil the OP's requirement, so...
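
If you do want to go the CrawlSpider route that the question title asks about, a rules-based spider would look something like the untested sketch below. It targets the same old Scrapy API as the rest of this answer; the class name, the spider name and the allow pattern for the pagination URLs are only guesses, and the items import is taken from the question's code.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from manga.items import MangaItem

class MangaHereCrawl(CrawlSpider):
    name = "mangah_crawl"
    allowed_domains = ["mangahere.com"]
    start_urls = ["http://www.mangahere.com/seinen/"]

    # Follow pagination links such as /seinen/2.htm and hand every matched
    # page to parse_item.
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/seinen/\d+\.htm',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        for site in hxs.select('//ul/li/div'):
            # extract() returns a list of strings, so cast before comparing
            for r in site.select("p/span/text()").extract():
                if float(r) > 4.5:
                    item = MangaItem()
                    item["title"] = site.select("div/a/text()").extract()
                    item["desc"] = site.select("p[2]/text()").extract()
                    item["link"] = site.select("div/a/@href").extract()
                    item["rate"] = site.select("p/span/text()").extract()
                    yield item

Two caveats if you go that way: the callback must not be called parse, because CrawlSpider uses parse() internally to apply its rules, and the start page itself is only used for link extraction unless you handle it separately (for example via parse_start_url). The rest of this answer sticks with the BaseSpider approach.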

A few things to note:

  1. Since not all of the pagination links are available (you get the first nine and then the last two), I employed a somewhat hacktastic approach. Using the first response in the parse callback, I search for a link with a class of "next" (there's only one, so have a look to see which link it corresponds to), and then find its immediately preceding sibling. This gives me a handle on the total number of pages in the seinen category (currently 45).
  2. Next, we yield a Request object for the first page to be processed by the parse_item callback.
  3. Then, given that we have determined that there are 45 pages in total, we generate a whole series of Request objects for "./seinen/2.htm" all the way to "./seinen/45.htm".
  4. Since rating is a list and its values are strings that need to be treated as floats (which I should have realised from the fact that the condition is 4.5), the way to fix the error encountered is to loop through the list of ratings and cast each item to a float (see the short illustration right after this list).
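
As a quick aside on why the original "if rating > 4.5" check appeared to ignore the condition entirely: in Python 2, comparing objects of unrelated built-in types does not raise an error; they are ordered in an arbitrary but consistent way, and a list always compares as greater than a number, so the check was always true. A minimal illustration (the 4.82 and 3.10 values are just made-up examples):

rating = ['4.82']              # what extract() actually gives you: a list of strings
print rating > 4.5             # True in Python 2, no matter what the rating is
print float(rating[0]) > 4.5   # True, and now for the right reason
print float('3.10') > 4.5      # False, correctly filtered out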

Anyway, have a look at the following code and see if it makes sense. In theory you should be able to extend this code easily to scrape multiple categories, though that is left as an exercise for the OP. :)

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from tutorial.items import MangaItem
from urlparse import urlparse

class MangaHere(BaseSpider):
    name = "mangah2"
    start_urls = ["http://www.mangahere.com/seinen/"]
    allowed_domains = ["mangahere.com"]

    def parse(self, response):
        # get index depth ie the total number of pages for the category
        hxs = HtmlXPathSelector(response)
        next_link = hxs.select('//a[@class="next"]')
        index_depth = int(next_link.select('preceding-sibling::a[1]/text()').extract()[0])

        # create a request for the first page
        url = urlparse("http://www.mangahere.com/seinen/")
        yield Request(url.geturl(), callback=self.parse_item)

        # create a request for each subsequent page in the form "./seinen/x.htm"
        # index_depth + 1 so the last page (e.g. ./seinen/45.htm) is included
        for x in xrange(2, index_depth + 1):
            pageURL = "http://www.mangahere.com/seinen/%s.htm" % x
            url = urlparse(pageURL)
            yield Request(url.geturl(), callback=self.parse_item)

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li/div')
        items = []
        for site in sites:
            # the rating is the text of the span inside the entry's paragraph
            rating = site.select("p/span/text()").extract()
            # extract() returns strings, so cast each one before comparing
            for r in rating:
                if float(r) > 4.5:
                    item = MangaItem()
                    item["title"] = site.select("div/a/text()").extract()
                    item["desc"] = site.select("p[2]/text()").extract()
                    item["link"] = site.select("div/a/@href").extract()
                    item["rate"] = site.select("p/span/text()").extract()
                    items.append(item)
        return items
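
If the spider sits in a regular Scrapy project, you can run it and dump the collected items to a file with something along these lines (the file name is just an example; on old Scrapy versions the feed format is given with -t):

scrapy crawl mangah2 -o seinen_ratings.json -t json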
