Custom signal not being handled by Scrapy internal API


Question


I am trying to handle a custom signal 'signalizers.item_extracted' in a Scrapy extension 'MyExtension', which is successfully enabled when Scrapy starts. Here is my code:

signalizers.py

from scrapy.exceptions import NotConfigured


# custom signals
item_extracted = object()
item_transformed = object()


class MyExtension(object):

    def __init__(self):
        pass

    @classmethod
    def from_crawler(cls, crawler):
        # first check if the extension should be enabled and raise
        # NotConfigured otherwise
        if not crawler.settings.getbool('MYEXTENSION_ENABLED'):
            raise NotConfigured

        # instantiate the extension object
        ext = cls()

        # connect the extension object to signals
        crawler.signals.connect(ext.item_extracted, signal=item_extracted)

        # return the extension object
        return ext

    def item_extracted(self, item, spider):
        # Do some stuff with the extracted item
        pass
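
For reference, an extension like this is typically enabled through the EXTENSIONS setting; below is a minimal sketch of settings.py, where the dotted path 'myproject.signalizers.MyExtension' is an assumption about the project layout:

settings.py

# enable the custom extension; the dotted path is assumed and should
# match wherever signalizers.py lives in your project
MYEXTENSION_ENABLED = True

EXTENSIONS = {
    'myproject.signalizers.MyExtension': 500,
}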


Then I try to send the 'signalizers.item_extracted' signal, but I think it is not handled, or at least I can neither see the actual output nor debug it:

In the spider:

from scrapy.signalmanager import SignalManager
from pydispatch import dispatcher
import signalizers

SignalManager(dispatcher.Any).send_catch_log(
    signal=signalizers.item_extracted,
    item=item,
    spider=spider)

Am I doing something wrong here?

Answer


After reading some of Scrapy's source code, I figured out that the problem was creating a new SignalManager instance instead of using the crawler's one:

spider.crawler.signals.send_catch_log(signal=signalizers.item_extracted, item=item, spider=spider)

Now the signal is handled correctly by the extension.
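
For context, here is a minimal sketch of how the corrected call might look inside a spider callback; the spider name, start URL, and item contents are illustrative assumptions:

import scrapy

import signalizers


class MySpider(scrapy.Spider):
    name = 'my_spider'  # assumed spider name
    start_urls = ['http://example.com']  # assumed start URL

    def parse(self, response):
        item = {'url': response.url}
        # send the custom signal through the crawler's own SignalManager,
        # so handlers connected via crawler.signals actually receive it
        self.crawler.signals.send_catch_log(
            signal=signalizers.item_extracted,
            item=item,
            spider=self)
        yield item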

