scrapy log handler


Question

I seek your help with the following two questions. First, how do I set handlers for the different log levels, as in Python's logging module? Currently, I have

STATS_ENABLED = True
STATS_DUMP = True 

LOG_FILE = 'crawl.log'

But the debug messages generated by Scrapy are also added to the log file. They are very long; ideally, I would like the DEBUG-level messages to be left on standard error and the INFO messages to be dumped to my LOG_FILE.
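One way to get that split with the standard library alone is to attach two handlers at different levels to the root logger. This is a minimal sketch, not Scrapy-specific configuration; the file name `crawl.log` comes from the settings above, and everything else is assumed:

```python
import logging
import sys

# Let all records reach the handlers; each handler then filters by its own level.
root = logging.getLogger()
root.setLevel(logging.DEBUG)

# stderr sees everything, including DEBUG
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.DEBUG)

# the file only records INFO and above
file_handler = logging.FileHandler('crawl.log', mode='w')
file_handler.setLevel(logging.INFO)

root.addHandler(stderr_handler)
root.addHandler(file_handler)

root.debug('only on stderr')
root.info('on stderr and in crawl.log')
```

The key point is that a logger's level gates what reaches the handlers at all, while each handler's level decides what that destination actually records.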

Secondly, the docs say the logging service must be explicitly started through the scrapy.log.start() function. My question is: where do I run this scrapy.log.start()? Is it inside my spider?

Answer

Just wanted to update that I was able to get the logging file handler writing to a file by using

from twisted.python import log
import logging

# Write INFO-and-above records to log.txt, overwriting it on each run
logging.basicConfig(level=logging.INFO, filemode='w', filename='log.txt')

# Route Twisted's log messages into the standard logging module
observer = log.PythonLoggingObserver()
observer.start()
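As for getting the spider's name into each line: one generic stdlib workaround is a LoggerAdapter that injects extra fields into every record. This is only a sketch; the `%(spider)s` format field, the `'myspider'` name, and the logger name are assumptions for illustration, not Scrapy's own API:

```python
import logging
import sys

# Formatter that expects a 'spider' field on each record
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter('[%(spider)s] %(levelname)s: %(message)s'))

logger = logging.getLogger('scrapy')
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# The adapter merges {'spider': 'myspider'} into every record it emits
spider_log = logging.LoggerAdapter(logger, {'spider': 'myspider'})
spider_log.info('crawled 10 pages')
```

Each message logged through `spider_log` then carries the spider name without repeating it at every call site.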

However, I am unable to get the log to display the spider's name, the way Twisted does on standard error. I posted this question.
