Scrapy: non-blocking pause


Problem Description

I have a problem: I need to stop the execution of one function for a while, but not stop parsing as a whole. That is, I need a non-blocking pause.

It looks like this:

from scrapy import Spider, Request


class ScrapySpider(Spider):
    name = 'live_function'

    def start_requests(self):
        yield Request('some url', callback=self.non_stop_function)

    def non_stop_function(self, response):
        for url in ['url1', 'url2', 'url3', 'more urls']:
            yield Request(url, callback=self.second_parse_function)

        # Here I need something that pauses only this function, like time.sleep(10)

        yield Request('some url', callback=self.non_stop_function)  # Call itself

    def second_parse_function(self, response):
        pass

The function non_stop_function needs to be paused for a while, but it should not block the rest of the output.

If I insert time.sleep(), it stops the whole parser, which is not what I need. Is it possible to pause just one function, using Twisted or something else?

Reason: I need to create a non-blocking function that parses a page of the website every n seconds, collecting URLs there, and then pauses for 10 seconds. The URLs that have already been obtained continue to be processed, but the main function needs to sleep.

UPDATE:

Thanks to TkTech and viach. One answer helped me understand how to make a pending Request, and the second showed how to fire it. The two answers complement each other, and I ended up with an excellent non-blocking pause for Scrapy:

from twisted.internet import reactor
from twisted.internet.defer import Deferred


def call_after_pause(self, response):
    d = Deferred()
    # Schedule the Deferred to fire with a fresh Request after 10 seconds;
    # reactor.callLater does not block the reactor in the meantime.
    reactor.callLater(10.0, d.callback, Request(
        'https://example.com/',
        callback=self.non_stop_function,
        dont_filter=True))
    return d

And use this function for my request:

yield Request('https://example.com/', callback=self.call_after_pause, dont_filter=True)
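
For context, here is a minimal self-contained sketch putting the whole pattern together. The spider name and example.com URL are placeholders standing in for the ones above; this is an illustration of the pattern from the update, not a drop-in implementation:

from scrapy import Spider, Request
from twisted.internet import reactor
from twisted.internet.defer import Deferred


class PauseSpider(Spider):
    # Hypothetical spider name, standing in for the placeholders above.
    name = 'pause_spider'

    def start_requests(self):
        yield Request('https://example.com/',
                      callback=self.non_stop_function, dont_filter=True)

    def non_stop_function(self, response):
        # ... yield Requests for the URLs scraped from this response ...

        # Re-schedule this function through the non-blocking pause.
        yield Request('https://example.com/',
                      callback=self.call_after_pause, dont_filter=True)

    def call_after_pause(self, response):
        # Return a Deferred; it fires with a new Request after 10 seconds
        # while the reactor keeps processing other requests.
        d = Deferred()
        reactor.callLater(10.0, d.callback, Request(
            'https://example.com/',
            callback=self.non_stop_function,
            dont_filter=True))
        return d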

Recommended Answer

The Request object has a callback parameter; try to use that one for this purpose. I mean, create a Deferred which wraps self.second_parse_function and pause.

Here is my dirty and untested example; the changed lines are marked.

from twisted.internet.defer import Deferred


class ScrapySpider(Spider):
    name = 'live_function'

    def start_requests(self):
        yield Request('some url', callback=self.non_stop_function)

    def non_stop_function(self, response):

        parse_and_pause = Deferred()  # changed
        parse_and_pause.addCallback(self.second_parse_function) # changed
        parse_and_pause.addCallback(pause, seconds=10)  # changed

        for url in ['url1', 'url2', 'url3', 'more urls']:
            yield Request(url, callback=parse_and_pause)  # changed

        yield Request('some url', callback=self.non_stop_function)  # Call itself

    def second_parse_function(self, response):
        pass

If the approach works for you, then you can create a function which constructs a Deferred object according to the rule. It could be implemented like the following:

def get_perform_and_pause_deferred(seconds, fn, *args, **kwargs):
    # Chain the parse function and the `pause` delay helper on one Deferred.
    d = Deferred()
    d.addCallback(fn, *args, **kwargs)
    d.addCallback(pause, seconds=seconds)
    return d

And here is a possible usage:

class ScrapySpider(Spider):
    name = 'live_function'

    def start_requests(self):
        yield Request('some url', callback=self.non_stop_function)

    def non_stop_function(self, response):
        for url in ['url1', 'url2', 'url3', 'more urls']:
            # changed
            yield Request(url, callback=get_perform_and_pause_deferred(10, self.second_parse_function))

        yield Request('some url', callback=self.non_stop_function)  # Call itself

    def second_parse_function(self, response):
        pass
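
Note that neither snippet defines the pause helper it chains onto. A minimal sketch of what it could look like, using Twisted's deferLater (the helper's name and behaviour are assumptions, as the answer does not show it):

from twisted.internet import reactor
from twisted.internet.task import deferLater


def pause(result, seconds=10):
    # Fire after `seconds` with the previous callback's result,
    # returning control to the reactor instead of blocking it.
    return deferLater(reactor, seconds, lambda: result)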
