Python, want logging with log rotation and compression
Question
Can anyone suggest a way in python to do logging with:
- log rotation every day
- compression of logs when they are rotated
- optional - delete the oldest log files to preserve X MB of free space
- optional - sftp the log files to a server
Thanks for any responses, Fred
Answer
- log rotation every day: Use a TimedRotatingFileHandler.
- compression of logs: Set the encoding='bz2' parameter. (Note this "trick" only works on Python 2; 'bz2' is no longer considered an encoding in Python 3.)
- optional - delete the oldest log file to preserve X MB of free space: You can (indirectly) arrange this using a RotatingFileHandler. By setting the maxBytes parameter, the log file rolls over when it reaches a certain size. By setting the backupCount parameter, you control how many rollovers are kept. Together, the two parameters let you cap the maximum space the log files consume. You could probably subclass TimedRotatingFileHandler to incorporate this behavior as well.
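On Python 3 the encoding='bz2' trick no longer applies, but since Python 3.3 the rotating handlers expose rotator and namer hooks that can compress a file as it is rotated. Here is a minimal sketch using gzip with a size-based handler (the file names and byte sizes are arbitrary choices for the demo; the same hooks work on TimedRotatingFileHandler):

```python
import gzip
import logging
import logging.handlers
import os
import shutil
import tempfile

def gz_namer(default_name):
    # the rotated file gets a .gz suffix, e.g. app.log.1 -> app.log.1.gz
    return default_name + ".gz"

def gz_rotator(source, dest):
    # compress the just-closed log file and remove the uncompressed original
    with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(source)

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "app.log")

# a tiny maxBytes just to trigger rollovers quickly in this demo
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=200, backupCount=3)
handler.rotator = gz_rotator
handler.namer = gz_namer

logger = logging.getLogger("gz_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

for i in range(50):
    logger.debug("message number %d", i)
handler.close()

print(sorted(os.listdir(log_dir)))
```

After the loop, the directory contains the live app.log plus gzip-compressed rotated files such as app.log.1.gz.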
Just for fun, here is how you could subclass TimedRotatingFileHandler. When you run the script below, it writes log files to /tmp/log_rotate*.
With a small value for time.sleep (such as 0.1), the log files fill up quickly, reach the maxBytes limit, and are rolled over.
With a large time.sleep (such as 1.0), the log files fill up slowly; the maxBytes limit is not reached, but they roll over anyway when the timed interval (of 10 seconds) is reached.
All the code below comes from logging/handlers.py. I simply meshed TimedRotatingFileHandler with RotatingFileHandler in the most straightforward way possible.
import time
import re
import os
import stat
import logging
import logging.handlers as handlers

class SizedTimedRotatingFileHandler(handlers.TimedRotatingFileHandler):
    """
    Handler for logging to a set of files, which switches from one file
    to the next when the current file reaches a certain size, or at certain
    timed intervals
    """
    def __init__(self, filename, maxBytes=0, backupCount=0, encoding=None,
                 delay=0, when='h', interval=1, utc=False):
        handlers.TimedRotatingFileHandler.__init__(
            self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have.
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            # due to non-posix-compliant Windows feature
            self.stream.seek(0, 2)
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        return 0

def demo_SizedTimedRotatingFileHandler():
    log_filename = '/tmp/log_rotate'
    logger = logging.getLogger('MyLogger')
    logger.setLevel(logging.DEBUG)
    handler = SizedTimedRotatingFileHandler(
        log_filename, maxBytes=100, backupCount=5,
        when='s', interval=10,
        # encoding='bz2',  # uncomment for bz2 compression
    )
    logger.addHandler(handler)
    for i in range(10000):
        time.sleep(0.1)
        logger.debug('i=%d' % i)

demo_SizedTimedRotatingFileHandler()
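The question also asked to preserve X MB of free space; backupCount caps the number of rotated files, not their total size. A pruning helper along these lines could enforce a byte budget directly (prune_logs is a hypothetical name of my own, not part of the logging module; you might call it after each rollover or from a cron job):

```python
import glob
import os

def prune_logs(pattern, max_total_bytes):
    """Delete the oldest files matching pattern until their combined
    size is at most max_total_bytes. (Hypothetical helper, not part
    of the standard logging module.)"""
    files = sorted(glob.glob(pattern), key=os.path.getmtime)  # oldest first
    total = sum(os.path.getsize(f) for f in files)
    for path in files:
        if total <= max_total_bytes:
            break
        total -= os.path.getsize(path)
        os.remove(path)

# example: keep at most 10 MB of rotated logs
# prune_logs('/tmp/log_rotate*', 10 * 1024 * 1024)
```

Sorting by modification time means the files dropped first are the ones least recently written, which for rotated logs is the oldest history.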