Python, want logging with log rotation and compression
Question
Can anyone suggest a way in python to do logging with:
- log rotation every day
- compression of logs when they are rotated
- optional - delete the oldest log files to preserve X MB of free space
- optional - sftp log files to a server
Thanks for any responses, Fred
Answer
- log rotation every day: Use a TimedRotatingFileHandler.
- compression of logs: Set the encoding='bz2' parameter. (Note this "trick" only works on Python 2; 'bz2' is no longer considered an encoding in Python 3.)
- optional - delete the oldest log file to preserve X MB of free space: You could (indirectly) arrange this using a RotatingFileHandler. By setting the maxBytes parameter, the log file will roll over when it reaches a certain size. By setting the backupCount parameter, you can control how many rollovers are kept. Together, the two parameters let you cap the maximum space the log files consume. You could also subclass TimedRotatingFileHandler to incorporate this behavior.
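Since the encoding='bz2' trick no longer works on Python 3, one alternative worth noting (a minimal sketch of my own, not part of the original answer) is to compress on rollover using the rotator and namer hooks that rotating handlers have offered since Python 3.3. The helper names bz2_namer and bz2_rotator below are illustrative, not a standard API:

```python
import bz2
import logging
import logging.handlers
import os
import tempfile

def bz2_namer(name):
    # Called with the rotated file's default name; append .bz2 to it.
    return name + ".bz2"

def bz2_rotator(source, dest):
    # Compress the just-closed log file into dest and drop the original.
    with open(source, "rb") as f_in, bz2.open(dest, "wb") as f_out:
        f_out.write(f_in.read())
    os.remove(source)

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "app.log")

handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=100, backupCount=3)
handler.rotator = bz2_rotator   # how to move the old file aside
handler.namer = bz2_namer       # what to call it once rotated

logger = logging.getLogger("bz2_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for i in range(50):
    logger.debug("message number %d", i)

handler.close()
print(sorted(os.listdir(log_dir)))
```

With backupCount=3 the directory ends up holding the live app.log plus app.log.1.bz2 through app.log.3.bz2; the internal rename chain uses the namer too, so the compressed backups rotate correctly.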
Just for fun, here is how you could subclass TimedRotatingFileHandler. When you run the script below, it will write log files to /tmp/log_rotate*.
With a small value for time.sleep (such as 0.1), the log files fill up quickly, reach the maxBytes limit, and are then rolled over.
With a large time.sleep (such as 1.0), the log files fill up slowly; the maxBytes limit is not reached, but they roll over anyway when the timed interval (of 10 seconds) is reached.
All the code below comes from logging/handlers.py. I simply merged TimedRotatingFileHandler with RotatingFileHandler in the most straightforward way possible.
import time
import re
import os
import stat
import logging
import logging.handlers as handlers


class SizedTimedRotatingFileHandler(handlers.TimedRotatingFileHandler):
    """
    Handler for logging to a set of files, which switches from one file
    to the next when the current file reaches a certain size, or at certain
    timed intervals.
    """

    def __init__(self, filename, maxBytes=0, backupCount=0, encoding=None,
                 delay=0, when='h', interval=1, utc=False):
        handlers.TimedRotatingFileHandler.__init__(
            self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have.
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            # due to non-posix-compliant Windows feature
            self.stream.seek(0, 2)
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        return 0


def demo_SizedTimedRotatingFileHandler():
    log_filename = '/tmp/log_rotate'
    logger = logging.getLogger('MyLogger')
    logger.setLevel(logging.DEBUG)
    handler = SizedTimedRotatingFileHandler(
        log_filename, maxBytes=100, backupCount=5,
        when='s', interval=10,
        # encoding='bz2',  # uncomment for bz2 compression
        )
    logger.addHandler(handler)
    for i in range(10000):
        time.sleep(0.1)
        logger.debug('i=%d' % i)


demo_SizedTimedRotatingFileHandler()
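As a quick check of the space bound described above: with maxBytes and backupCount set together, total disk usage stays at or below maxBytes * (backupCount + 1). A small self-contained demo of my own (not from the original answer), using a plain RotatingFileHandler so it runs on Python 3 without any subclassing:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "bounded.log")

# backupCount=5 keeps at most 5 rotated files plus the active one,
# so total usage is capped near maxBytes * (backupCount + 1) = 1200 bytes.
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=200, backupCount=5)

logger = logging.getLogger("bounded_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

# Write far more than the cap; old rollovers are deleted as we go.
for i in range(500):
    logger.debug("line %04d", i)

handler.close()
files = os.listdir(log_dir)
total = sum(os.path.getsize(os.path.join(log_dir, f)) for f in files)
print(len(files), total)
```

After 500 messages the directory holds exactly six files (bounded.log plus bounded.log.1 through .5), and their combined size stays under the 1200-byte cap, which is the "preserve X MB" behavior the question asked for, expressed as a byte budget rather than free space.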