Running scrapy from script not including pipeline


Question


I'm running Scrapy from a script, but all it does is activate the spider. It doesn't go through my item pipeline. I've read http://scrapy.readthedocs.org/en/latest/topics/practices.html but it doesn't say anything about including pipelines.

My setup:

Scraper/
    scrapy.cfg
    ScrapyScript.py
    Scraper/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            my_spider.py

My script:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from Scraper.spiders.my_spider import MySpiderSpider

spider = MySpiderSpider(domain='myDomain.com')
settings = get_project_settings
crawler = Crawler(Settings())
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Reactor activated...')
reactor.run()
log.msg('Reactor stopped.')

My pipelines:

from scrapy.exceptions import DropItem
from scrapy import log
import sqlite3


class ImageCheckPipeline(object):

    def process_item(self, item, spider):
        if item['image']:
            log.msg("Item added successfully.")
            return item
        else:
            del item
            raise DropItem("Non-image thumbnail found: ")


class StoreImage(object):

    def __init__(self):
        self.db = sqlite3.connect('images')
        self.cursor = self.db.cursor()
        try:
            self.cursor.execute('''
                CREATE TABLE IMAGES(IMAGE BLOB, TITLE TEXT, URL TEXT)
            ''')
            self.db.commit()
        except sqlite3.OperationalError:
            self.cursor.execute('''
                DELETE FROM IMAGES
            ''')
            self.db.commit()

    def process_item(self, item, spider):
        title = item['title'][0]
        image = item['image'][0]
        url = item['url'][0]
        self.cursor.execute('''
            INSERT INTO IMAGES VALUES (?, ?, ?)
        ''', (image, title, url))
        self.db.commit()

Script output:

[name@localhost Scraper]$ python ScrapyScript.py
2014-08-06 17:55:22-0400 [scrapy] INFO: Reactor activated...
2014-08-06 17:55:22-0400 [my_spider] INFO: Closing spider (finished)
2014-08-06 17:55:22-0400 [my_spider] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 213,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 18852,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 8, 6, 21, 55, 22, 518492),
     'item_scraped_count': 51,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2014, 8, 6, 21, 55, 22, 363898)}
2014-08-06 17:55:22-0400 [my_spider] INFO: Spider closed (finished)
2014-08-06 17:55:22-0400 [scrapy] INFO: Reactor stopped.
[name@localhost Scraper]$ 

Answer


You need to actually call get_project_settings; the Settings object you are passing to your crawler in the posted code gives you the defaults, not your project-specific settings. You need to write something like this:

from scrapy.utils.project import get_project_settings
settings = get_project_settings()
crawler = Crawler(settings)
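
For completeness, here is a minimal sketch of the full corrected script, assuming the same Scrapy 0.24-era API (Crawler, log, reactor) and the MySpiderSpider class from the question:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings
from Scraper.spiders.my_spider import MySpiderSpider

spider = MySpiderSpider(domain='myDomain.com')
# Call the function; it returns a Settings object populated from the project's settings.py
settings = get_project_settings()
# Pass the project settings so ITEM_PIPELINES (and everything else in settings.py) takes effect
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Reactor activated...')
reactor.run()
log.msg('Reactor stopped.')

Note that get_project_settings locates settings.py via the SCRAPY_SETTINGS_MODULE environment variable or, failing that, the nearest scrapy.cfg, so run the script from the project root (in the layout above, ScrapyScript.py already sits next to scrapy.cfg). With the project settings loaded, the pipelines registered in ITEM_PIPELINES (presumably ImageCheckPipeline and StoreImage here) will run for each scraped item.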

