Maintaining Logging and/or stdout/stderr in Python Daemon


Problem Description

Every recipe that I've found for creating a daemon process in Python involves forking twice (for Unix) and then closing all open file descriptors. (See http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ for an example).

This is all simple enough, but I seem to have an issue. On the production machine that I am setting up, my daemon is aborting silently, since all open file descriptors were closed. I am having a tricky time debugging the issue and am wondering what the proper way to catch and log these errors is.

What is the right way to set up logging so that it continues to work after daemonizing? Do I just call logging.basicConfig() a second time after daemonizing? What's the right way to capture stdout and stderr? I am fuzzy on the details of why all the files are closed. Ideally, my main code could just call daemon_start(pid_file) and logging would continue to work.

Recommended Answer

I use the python-daemon library for my daemonization behavior.

Its interface is specified in PEP 3143, and the implementation is published on PyPI as python-daemon.

It allows specifying a files_preserve argument, to indicate any file descriptors that should not be closed when daemonizing.
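For example, a minimal sketch of that parameter might look like the following (the log path is hypothetical; the only point is that any descriptor listed in files_preserve survives daemonization):

    import daemon

    # Hypothetical log file; descriptors listed in files_preserve are
    # left open when DaemonContext closes everything else.
    log_file = open('/tmp/mydaemon.log', 'a')

    with daemon.DaemonContext(files_preserve=[log_file.fileno()]):
        # We are now running as a daemon, but this descriptor is still open.
        log_file.write('still alive after daemonizing\n')
        log_file.flush()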

If you need logging via the same Handler instances before and after daemonizing, you can:

  1. First set up your logging Handlers using basicConfig or dictConfig or whatever.
  2. Log stuff.
  3. Determine which file descriptors your Handlers depend on. Unfortunately, this depends on the Handler subclass. If your first-installed Handler is a StreamHandler, it's the value of logging.root.handlers[0].stream.fileno(); if your second-installed Handler is a SysLogHandler, you want the value of logging.root.handlers[1].socket.fileno(); and so on. This is messy :-(
  4. Daemonize your process by creating a DaemonContext whose files_preserve is the list of file descriptors you determined in step 3.
  5. Continue logging; your log files should not have been closed during the double fork. (A sketch of these steps follows below.)
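Putting the steps together, a rough sketch (assuming a FileHandler created by basicConfig, an optional SysLogHandler, and a hypothetical log path) could look like this:

    import logging
    import logging.handlers
    import daemon

    # Steps 1-2: configure handlers and log something before daemonizing.
    logging.basicConfig(filename='/tmp/mydaemon.log', level=logging.DEBUG)
    logging.info('about to daemonize')

    # Step 3: collect the file descriptors the installed handlers rely on.
    # This pokes at Handler internals, so it is specific to each subclass.
    fds = []
    for handler in logging.root.handlers:
        if isinstance(handler, logging.StreamHandler):
            fds.append(handler.stream.fileno())
        elif isinstance(handler, logging.handlers.SysLogHandler):
            fds.append(handler.socket.fileno())

    # Step 4: daemonize, telling python-daemon not to close those descriptors.
    with daemon.DaemonContext(files_preserve=fds):
        # Step 5: the same Handler instances keep working after the double fork.
        logging.info('running as a daemon')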

An alternative might be, as @Exelian suggested, to actually use different Handler instances before and after the daemonization. Immediately after daemonizing, destroy the existing handlers (by del-ing them from logging.root.handlers, perhaps) and create identical new ones; you can't just re-call basicConfig because of the issue that @dave-mankoff pointed out.
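A rough sketch of that alternative, assuming a hypothetical setup_logging() helper that you would fill in with your real handlers:

    import logging
    import daemon

    def setup_logging():
        # Hypothetical helper: builds the handlers from scratch each time.
        handler = logging.FileHandler('/tmp/mydaemon.log')
        handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
        logging.root.addHandler(handler)
        logging.root.setLevel(logging.DEBUG)

    setup_logging()
    logging.info('before daemonizing')

    with daemon.DaemonContext():
        # The handlers created above now wrap descriptors that DaemonContext
        # closed, so drop them rather than trying to reuse them...
        for handler in list(logging.root.handlers):
            logging.root.removeHandler(handler)
        # ...and recreate identical handlers on the daemon side.
        setup_logging()
        logging.info('after daemonizing, with fresh handlers')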
