Using one Scrapy spider for several websites


Problem description

I need to create a user-configurable web spider/crawler, and I'm thinking about using Scrapy. But I can't hard-code the domains and allowed URL regexes -- these will instead be configurable in a GUI.

How do I (as simply as possible) create a spider or a set of spiders with Scrapy where the domains and allowed URL regexes are dynamically configurable? E.g. I write the configuration to a file, and the spider reads it somehow.
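(For illustration only, the configuration the GUI writes out could be as simple as one JSON file per spider; the file name, keys and values below are made-up assumptions, not anything Scrapy requires.)

# Purely illustrative: one JSON file per spider, written by the GUI and
# read back by the crawler. File name, keys and values are assumptions.
import json

config = {
    "name": "example",
    "start_urls": ["http://www.example.com/"],
    "extra_domain_names": ["example.com"],
    "regexes": [r"^http://www\.example\.com/articles/"],
}

with open("spiders/example.json", "w") as f:
    json.dump(config, f, indent=2)

# Later, the spider (or a spider manager) loads it again:
with open("spiders/example.json") as f:
    loaded = json.load(f)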

Recommended answer

Warning: this answer applies to Scrapy v0.7; the spider manager API has changed a lot since then.

Override the default SpiderManager class, load your custom rules from a database or somewhere else, and instantiate a custom spider with your own rules/regexes and domain_name.

In mybot/settings.py:

SPIDER_MANAGER_CLASS = 'mybot.spidermanager.MySpiderManager'

In mybot/spidermanager.py:

from mybot.spider import MyParametrizedSpider

class MySpiderManager(object):
    loaded = True

    def fromdomain(self, name):
        # Called with the name given on the command line; build a spider
        # configured with the rules stored for that name.
        start_urls, extra_domain_names, regexes = self._get_spider_info(name)
        return MyParametrizedSpider(name, start_urls, extra_domain_names, regexes)

    def close_spider(self, spider):
        # Put here any code you want to run before the spider is closed
        pass

    def _get_spider_info(self, name):
        # query your backend (maybe a SQL db) using `name` as primary key,
        # and return start_urls, extra_domains and regexes
        ...
        return (start_urls, extra_domains, regexes)
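As one possible backend, here is a minimal sketch of what _get_spider_info could look like against a small sqlite table. The database file, table name, column layout and the comma-separated serialization are assumptions for illustration; nothing here is prescribed by Scrapy:

import sqlite3

def _get_spider_info(self, name):
    # Assumed schema: spiders(name TEXT PRIMARY KEY, start_urls TEXT,
    #                         extra_domains TEXT, regexes TEXT), where the
    # list columns hold comma-separated values written by the GUI.
    conn = sqlite3.connect('spiders.db')
    try:
        row = conn.execute(
            "SELECT start_urls, extra_domains, regexes FROM spiders WHERE name = ?",
            (name,),
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        raise KeyError("no spider configured with name %r" % name)
    start_urls = row[0].split(',')
    extra_domains = row[1].split(',')
    regexes = row[2].split(',')
    return (start_urls, extra_domains, regexes)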

And now your custom spider class, in mybot/spider.py:

from scrapy.spider import BaseSpider

class MyParametrizedSpider(BaseSpider):

    def __init__(self, name, start_urls, extra_domain_names, regexes):
        self.domain_name = name
        self.start_urls = start_urls
        self.extra_domain_names = extra_domain_names
        self.regexes = regexes

    def parse(self, response):
        ...
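As a rough, untested sketch of how parse might use the configured regexes: the link extraction below uses plain re on the raw response body so as not to depend on a particular Scrapy version, and the href pattern and callback wiring are illustrative assumptions, not part of the answer above:

import re
from scrapy.http import Request

def parse(self, response):
    # Illustrative only: pull hrefs out of the page and follow those that
    # match any of the user-configured regexes. On Python 3 / current Scrapy
    # you would decode response.body (or use response.text), and relative
    # links would need to be joined against response.url first.
    requests = []
    for url in re.findall(r'href="([^"]+)"', response.body):
        if any(re.search(pattern, url) for pattern in self.regexes):
            requests.append(Request(url, callback=self.parse))
    return requests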

Notes:

  • You can extend CrawlSpider too if you want to take advantage of its Rules system (see the sketch after this list)
  • To run a spider use: ./scrapy-ctl.py crawl <name>, where name is passed to SpiderManager.fromdomain and is the key to retrieve more spider info from the backend system
  • Because this solution overrides the default SpiderManager, coding a classic spider (a Python module per spider) doesn't work, but I think this is not an issue for you. For more info on the default spider manager, see TwistedPluginSpiderManager
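As a sketch of the CrawlSpider variant mentioned in the first note, the example below builds the Rules dynamically from the user-supplied regexes. Note that it is written against the current Scrapy API (scrapy.spiders.CrawlSpider, LinkExtractor), not the 0.7 API the rest of this answer targets, and the class and callback names are illustrative:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MyParametrizedCrawlSpider(CrawlSpider):

    def __init__(self, name, start_urls, allowed_domains, regexes, *args, **kwargs):
        self.name = name
        self.start_urls = start_urls
        self.allowed_domains = allowed_domains
        # Build the Rules dynamically from the user-configured regexes,
        # before CrawlSpider.__init__ compiles them.
        self.rules = [
            Rule(LinkExtractor(allow=regexes), callback='parse_item', follow=True),
        ]
        super(MyParametrizedCrawlSpider, self).__init__(*args, **kwargs)

    def parse_item(self, response):
        ...  # your extraction code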
