Logging to specific error log file in scrapy

Problem Description

I am setting up a Scrapy log like this:

from scrapy import log
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):
    name = "myspider"

    def __init__(self, name=None, **kwargs):
        LOG_FILE = "logs/spider.log"
        # Reset the default observer and the started flag so that
        # log.start() takes effect for this spider.
        log.log.defaultObserver = log.log.DefaultObserver()
        log.log.defaultObserver.start()
        log.started = False
        log.start(LOG_FILE, loglevel=log.INFO)
        super(MySpider, self).__init__(name, **kwargs)

    def parse(self, response):
        ...
        raise Exception("Something went wrong!")
        log.msg('Something went wrong!', log.ERROR)

        # Somehow write to a separate error log here.

Then I run the spider like this:

scrapy crawl myspider

This stores all of the log.INFO output, as well as log.ERROR, in spider.log.

If an error occurs, I would also like to store those details in a separate log file called spider_errors.log. It would make it easier to search for errors that occurred rather than trying to scan through the entire spider.log file (which could be huge).

Is there a way to do this?

Trying with PythonLoggingObserver:

def __init__(self, name=None, **kwargs):
    LOG_FILE = 'logs/spider.log'
    ERR_FILE = 'logs/spider_error.log'

    observer = log.log.PythonLoggingObserver()
    observer.start()

    log.started = False
    log.start(LOG_FILE, loglevel=log.INFO)
    log.start(ERR_FILE, loglevel=log.ERROR)

But I get ERROR: No handlers could be found for logger "twisted".

Recommended Answer

Just let logging do the job. Try to use PythonLoggingObserver instead of DefaultObserver:

  • configure two loggers (one for INFO and one for ERROR messages) directly in Python, or via fileConfig, or via dictConfig (see the logging docs; a dictConfig sketch follows the snippet below)
  • start the observer in the spider's __init__:

def __init__(self, name=None, **kwargs):
    # TODO: configure logging: e.g. logging.config.fileConfig("logging.conf")
    observer = log.PythonLoggingObserver()
    observer.start()
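
For the TODO above, here is a minimal dictConfig sketch; the file names mirror the question, and the handler names are illustrative assumptions. Configuring handlers this way also gives the stdlib "twisted" logger (which PythonLoggingObserver forwards Twisted log events to) somewhere to send its records, so the No handlers could be found for logger "twisted" error goes away.

import logging.config

# Minimal sketch: two file handlers, one accepting INFO and above,
# one accepting only ERROR and above. Handler names are illustrative.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'info_file': {
            'class': 'logging.FileHandler',
            'filename': 'logs/spider.log',
            'level': 'INFO',
        },
        'error_file': {
            'class': 'logging.FileHandler',
            'filename': 'logs/spider_error.log',
            'level': 'ERROR',
        },
    },
    'root': {
        'handlers': ['info_file', 'error_file'],
        'level': 'INFO',
    },
}

logging.config.dictConfig(LOGGING)  # call this before observer.start()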

Let me know if you need help with configuring loggers.

Another option is to start two file log observers in the spider's __init__:

import logging

from scrapy import log
from scrapy.log import ScrapyFileLogObserver
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):
    name = "myspider"

    def __init__(self, name=None, **kwargs):
        # One observer per file: spider.log receives INFO and above,
        # spider_error.log receives only ERROR and above.
        ScrapyFileLogObserver(open("spider.log", 'w'), level=logging.INFO).start()
        ScrapyFileLogObserver(open("spider_error.log", 'w'), level=logging.ERROR).start()

        super(MySpider, self).__init__(name, **kwargs)

    ...
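
A quick usage sketch (this callback is illustrative, not part of the original answer): once both observers are running, a message's level decides which files it lands in.

    def parse(self, response):
        log.msg("Parsed %s" % response.url, level=log.INFO)   # written to spider.log only
        log.msg("Something went wrong!", level=log.ERROR)     # written to both log files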
