Python logging and subprocess output and error stream


Problem description

I would like to start a Python process and log subprocess error messages to the logging object of the parent script. I would ideally like to unify the log streams into one file. Can I somehow access the output stream of the logging class? One solution I know of is to use a separate proc log file for logging. As described in the answer below, I could read from proc.stdout and stderr, but I'd have duplicate logging headers. I wonder if there is a way to pass the file descriptor underlying the logging class directly to the subprocess?

import logging
import os
import subprocess

cmdStr = "ls -la"  # example command; the original question leaves this undefined

logging.basicConfig(filename="test.log", level=logging.DEBUG)
logging.info("Started")
procLog = open(os.path.expanduser("subproc.log"), 'w')
proc = subprocess.Popen(cmdStr, shell=True, stderr=procLog, stdout=procLog)
proc.wait()
procLog.flush()
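
(To make the question concrete: the open file object underlying a logging.FileHandler is exposed as handler.stream, so passing it to the subprocess would look roughly like the sketch below. "ls -la" is a stand-in command, and the answer below explains why sharing the file this way can garble the log.)

import logging
import subprocess

logging.basicConfig(filename="test.log", level=logging.DEBUG)
logging.info("Started")

# the FileHandler created by basicConfig; its open file object is .stream
handler = logging.getLogger().handlers[0]
proc = subprocess.Popen("ls -la", shell=True,  # stand-in command
                        stdout=handler.stream, stderr=handler.stream)
proc.wait()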

Recommended answer

Based on Adam Rosenfield's code, you can

  1. use select.select to block until there is output to be read from proc.stdout or proc.stderr,
  2. read and log that output, then
  3. repeat until the process is done.

Note that the following writes to /tmp/test.log and runs the command ls -laR /tmp. Change to suit your needs.

(PS: Typically /tmp contains directories that cannot be read by normal users, so running ls -laR /tmp produces output on both stdout and stderr. The code below correctly interleaves those two streams as they are produced.)

import logging
import subprocess
import shlex
import select
import fcntl
import os
import errno
import contextlib

logger = logging.getLogger(__name__)

def make_async(fd):
    '''add the O_NONBLOCK flag to a file descriptor'''
    fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)

def read_async(fd):
    '''read some data from a file descriptor, ignoring EAGAIN errors'''
    try:
        return fd.read()
    except IOError as e:
        if e.errno != errno.EAGAIN:
            raise e
        else:
            return ''

def log_fds(fds):
    for fd in fds:
        out = read_async(fd)
        if out:
            # pipe reads return bytes on Python 3; decode before logging
            logger.info(out.decode() if isinstance(out, bytes) else out)

@contextlib.contextmanager
def plain_logger():
    '''temporarily log messages without any formatting header'''
    root = logging.getLogger()
    hdlr = root.handlers[0]
    formatter_orig = hdlr.formatter
    hdlr.setFormatter(logging.Formatter('%(message)s'))
    try:
        yield
    finally:
        hdlr.setFormatter(formatter_orig)

def main():
    # fmt = '%(name)-12s: %(levelname)-8s %(message)s'
    logging.basicConfig(filename='/tmp/test.log', filemode='w',
                        level=logging.DEBUG)

    logger.info("Started")
    cmdStr = 'ls -laR /tmp'

    with plain_logger():
        proc = subprocess.Popen(shlex.split(cmdStr),
                                stdout = subprocess.PIPE, stderr = subprocess.PIPE)
        # without `make_async`, `fd.read` in `read_async` blocks.
        make_async(proc.stdout)
        make_async(proc.stderr)
        while True:
            # Wait for data to become available 
            rlist, wlist, xlist = select.select([proc.stdout, proc.stderr], [], [])
            log_fds(rlist)
            if proc.poll() is not None:
                # Corner case: check if more output was created
                # between the last call to read_async and now                
                log_fds([proc.stdout, proc.stderr])                
                break

    logger.info("Done")

if __name__ == '__main__':
    main()


You can redirect stdout and stderr to logfile = open('/tmp/test.log', 'a'). A small difficulty with doing so, however, is that any logger handler that is also writing to /tmp/test.log will not be aware of what the subprocess is writing, and so the log file may get garbled.
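
(A minimal sketch of that garbling, with a hypothetical file name: two handles opened on the same file keep independent offsets, so the handler's next write can land on top of what the child appended.)

# hypothetical demo of the file-position problem described above:
# 'parent' stands in for a logging handler opened with filemode='w',
# 'child' for a subprocess appending to the same file
parent = open('/tmp/demo.log', 'w')
parent.write('parent line 1\n')
parent.flush()

child = open('/tmp/demo.log', 'a')
child.write('child output\n')    # appended at the current end of file
child.flush()
child.close()

parent.write('parent line 2\n')  # written at parent's old offset,
parent.flush()                   # overwriting the child's line
parent.close()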

If you do not make logging calls while the subprocess is doing its business, then the only problem is that the logger handler has the wrong position in the file after the subprocess has finished. That can be fixed by calling

handler.stream.seek(0, 2)

so the handler will resume writing at the end of the file.

import logging
import subprocess
import contextlib
import shlex

logger = logging.getLogger(__name__)

@contextlib.contextmanager
def suspended_logger():
    '''let something else write to the log file, then reposition the handler'''
    root = logging.getLogger()
    handler = root.handlers[0]
    try:
        yield
    finally:
        # resume writing at the end of the file (seek relative to EOF)
        handler.stream.seek(0, 2)

def main():
    logging.basicConfig(filename = '/tmp/test.log', filemode = 'w',
                        level = logging.DEBUG)

    logger.info("Started")
    with suspended_logger():
        # test2.py is the author's placeholder for the child script;
        # its output goes to logfile via the stdout/stderr arguments,
        # so no shell redirection is needed in the command string
        cmdStr = 'test2.py'
        logfile = open('/tmp/test.log', 'a')
        proc = subprocess.Popen(shlex.split(cmdStr),
                                stdout=logfile,
                                stderr=logfile)
        proc.communicate()
        logfile.close()
    logger.info("Done")

if __name__ == '__main__':
    main()

