ReactorNotRestartable with scrapy when using Google Cloud Functions


Problem description

I am trying to send multiple crawl requests with Google Cloud Functions. However, I seem to be getting the ReactorNotRestartable error. From other posts on StackOverflow, such as this one, I understand that this comes because it is not possible to restart the reactor, in particular when doing a loop.

The way to solve this is by putting the start() outside the for loop. However, with Cloud Functions this is not possible, as each request should technically be independent.
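
The usual pattern looks roughly like the following minimal sketch (SpiderA and SpiderB are hypothetical spiders, not part of the question): every crawl is scheduled first, and a single start() call then blocks until all of them finish, so the reactor is only ever started once.

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess()

# Schedule every crawl up front; nothing runs yet.
process.crawl(SpiderA)
process.crawl(SpiderB)

# Start the reactor exactly once, outside any loop; blocks until all crawls finish.
process.start()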

Is the CrawlerProcess somehow cached with Cloud Functions? And if so, how can we remove this behaviour?

I tried for instance to put the import and initialization process inside a function, instead of outside, to prevent the caching of imports, but that did not work:

# main.py

def run_single_crawl(data, context):
    # Import and construct the CrawlerProcess inside the function body,
    # in the hope of avoiding any state cached between invocations.
    from scrapy.crawler import CrawlerProcess
    process = CrawlerProcess()

    # MySpider is a regular scrapy.Spider defined elsewhere in main.py.
    process.crawl(MySpider)
    process.start()  # still raises ReactorNotRestartable on subsequent invocations

Answer

By default, the asynchronous nature of scrapy is not going to work well with Cloud Functions, as we'd need a way to block on the crawl to prevent the function from returning early and the instance being killed before the process terminates.

Instead, we can use scrapydo to run your existing spider in a blocking fashion:

requirements.txt:

scrapydo

main.py:

import scrapy
import scrapydo

# One-time setup: starts the reactor in a background thread (via crochet),
# so crawls can be run in a blocking fashion from ordinary functions.
scrapydo.setup()


class MyItem(scrapy.Item):
    url = scrapy.Field()


class MySpider(scrapy.Spider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = ["http://example.com/"]

    def parse(self, response):
        # Yield one item per crawled page.
        yield MyItem(url=response.url)


def run_single_crawl(data, context):
    # Blocks until the crawl finishes and returns the scraped items.
    results = scrapydo.run_spider(MySpider)

This also shows a simple example of how to yield one or more scrapy.Item from the spider and collect the results from the crawl, which would also be challenging to do if not using scrapydo.
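
For example, here is a minimal sketch (reusing the same background-trigger signature for run_single_crawl as above) of collecting those items and serializing them for logging or further processing:

import json

def run_single_crawl(data, context):
    # run_spider blocks until the crawl completes and returns the scraped items.
    results = scrapydo.run_spider(MySpider)

    # Each result is a scrapy.Item; dict() turns it into a plain mapping
    # that can be serialized, logged, or written to storage from here.
    print(json.dumps([dict(item) for item in results]))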

Also: make sure that you have billing enabled for your project. By default Cloud Functions cannot make outbound requests, and the crawler will succeed, but return no results.
