Scrapy - Reactor not Restartable


Question

With:

from twisted.internet import reactor
from scrapy.crawler import CrawlerProcess

I've always run this process successfully:

process = CrawlerProcess(get_project_settings())
process.crawl(*args)
# the script will block here until the crawling is finished
process.start() 

but since I've moved this code into a web_crawler(self) function, like so:

def web_crawler(self):
    # set up a crawler
    process = CrawlerProcess(get_project_settings())
    process.crawl(*args)
    # the script will block here until the crawling is finished
    process.start() 

    # (...)

    return (result1, result2) 

and started calling the method using class instantiation, like:

def __call__(self):
    results1 = test.web_crawler()[1]
    results2 = test.web_crawler()[0]

and running:

test()

I get the following error:

Traceback (most recent call last):
  File "test.py", line 573, in <module>
    print (test())
  File "test.py", line 530, in __call__
    artists = test.web_crawler()
  File "test.py", line 438, in web_crawler
    process.start() 
  File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 280, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "/Library/Python/2.7/site-packages/twisted/internet/base.py", line 1194, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)
  File "/Library/Python/2.7/site-packages/twisted/internet/base.py", line 1174, in startRunning
    ReactorBase.startRunning(self)
  File "/Library/Python/2.7/site-packages/twisted/internet/base.py", line 684, in startRunning
    raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable

What is wrong?

Answer

You cannot restart the reactor: each call to web_crawler() ends up calling process.start(), which runs Twisted's reactor, and a reactor can only be started once per process. You should, however, be able to run the crawl more times by forking a separate process for each run:

import scrapy
import scrapy.crawler as crawler
from multiprocessing import Process, Queue
from twisted.internet import reactor

# your spider
class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ['http://quotes.toscrape.com/tag/humor/']

    def parse(self, response):
        for quote in response.css('div.quote'):
            print(quote.css('span.text::text').extract_first())


# the wrapper to make it run more times
def run_spider(spider):
    def f(q):
        try:
            runner = crawler.CrawlerRunner()
            deferred = runner.crawl(spider)
            deferred.addBoth(lambda _: reactor.stop())
            reactor.run()
            q.put(None)
        except Exception as e:
            q.put(e)

    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    result = q.get()
    p.join()

    if result is not None:
        raise result

Run it twice:

print('first run:')
run_spider(QuotesSpider)

print('\nsecond run:')
run_spider(QuotesSpider)

Result:

first run:
"The person, be it gentleman or lady, who has not pleasure in a good novel, must be intolerably stupid."
"A day without sunshine is like, you know, night."
...

second run:
"The person, be it gentleman or lady, who has not pleasure in a good novel, must be intolerably stupid."
"A day without sunshine is like, you know, night."
...
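
The run_spider wrapper above only prints from inside the spider. If, as in the original web_crawler(), results need to come back to the caller, the same Queue can carry them out of the child process. The following is a minimal sketch, not part of the accepted answer: the name run_spider_with_results is made up for illustration, it assumes a spider that yields items (QuotesSpider above would need to yield dicts instead of printing for item_scraped to fire), and, like the answer's code, it relies on multiprocessing forking the child process.

from multiprocessing import Process, Queue

import scrapy.crawler as crawler
from scrapy import signals
from twisted.internet import reactor


# sketch: like run_spider, but ships the scraped items back through the queue
def run_spider_with_results(spider_cls):
    def f(q):
        try:
            results = []

            # named function (not a lambda) so the signal connection keeps a live reference
            def collect_item(item, response, spider):
                results.append(item)

            runner = crawler.CrawlerRunner()
            c = runner.create_crawler(spider_cls)
            c.signals.connect(collect_item, signal=signals.item_scraped)

            deferred = runner.crawl(c)
            deferred.addBoth(lambda _: reactor.stop())
            reactor.run()
            q.put(results)  # hand the collected items to the parent process
        except Exception as e:
            q.put(e)

    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    result = q.get()
    p.join()

    if isinstance(result, Exception):
        raise result
    return result

With that in place, the method from the question could be written roughly as:

def web_crawler(self):
    items = run_spider_with_results(QuotesSpider)

    # (...)

    return (result1, result2)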
