Maintaining Logging and/or stdout/stderr in Python Daemon
Question
Every recipe that I've found for creating a daemon process in Python involves forking twice (for Unix) and then closing all open file descriptors. (See http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ for an example.)
This is all simple enough, but I seem to have an issue. On the production machine that I am setting up, my daemon is aborting silently, since all open file descriptors were closed. I am having a tricky time debugging the issue and am wondering what the proper way to catch and log these errors is.
What is the right way to set up logging such that it continues to work after daemonizing? Do I just call logging.basicConfig() a second time after daemonizing? What's the right way to capture stdout and stderr? I am fuzzy on the details of why all the files are closed. Ideally, my main code could just call daemon_start(pid_file) and logging would continue to work.
Answer
I use the python-daemon library for my daemonization behavior.
The interface is described here:
The implementation is here:
It allows specifying a files_preserve argument, to indicate any file descriptors that should not be closed when daemonizing.
If you need logging via the same Handler instances before and after daemonizing, you can:

1. First set up your logging Handlers using basicConfig or dictConfig or whatever.
2. Log stuff.
3. Determine what file descriptors your Handlers depend on. Unfortunately, this depends on the Handler subclass. If your first-installed Handler is a StreamHandler, it's the value of logging.root.handlers[0].stream.fileno(); if your second-installed Handler is a SysLogHandler, you want the value of logging.root.handlers[1].socket.fileno(); etc. This is messy :-(
4. Daemonize your process by creating a DaemonContext with files_preserve equal to a list of the file descriptors you determined in step 3.
5. Continue logging; your log files should not have been closed during the double fork.
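The messy fd-determination in step 3 can be sketched as a helper that walks the installed handlers; this covers only the two Handler subclasses mentioned above:

```python
import logging
import logging.handlers

def handler_fds(logger=None):
    """Step 3: collect the file descriptors the installed handlers rely on."""
    logger = logger if logger is not None else logging.root
    fds = []
    for handler in logger.handlers:
        if isinstance(handler, logging.handlers.SysLogHandler):
            fds.append(handler.socket.fileno())  # SysLogHandler keeps a socket
        elif isinstance(handler, logging.StreamHandler):
            fds.append(handler.stream.fileno())  # also covers FileHandler
        # Any other Handler subclass would need its own case -- messy indeed.
    return fds
```

The returned list is exactly what step 4 passes as files_preserve to DaemonContext.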
An alternative might be, as @Exelian suggested, to actually use different Handler instances before and after the daemonization. Immediately after daemonizing, destroy the existing handlers (by del-ing them from logging.root.handlers?) and create identical new ones; you can't just re-call basicConfig because of the issue that @dave-mankoff pointed out.
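A sketch of that alternative; log_path is a placeholder, and the replacement FileHandler should mirror whatever configuration you had before daemonizing:

```python
import logging

def rebuild_handlers(log_path):
    """Call immediately after daemonizing: drop dead handlers, make fresh ones."""
    root = logging.root
    for handler in list(root.handlers):
        root.removeHandler(handler)  # safer than del-ing from the list directly
        try:
            handler.close()
        except OSError:
            pass  # its descriptor may already be gone after the double fork
    # Build an identical replacement; simply re-calling basicConfig would have
    # been a no-op while the old handlers were still installed on the root logger.
    root.addHandler(logging.FileHandler(log_path))
```

Using removeHandler plus close, rather than deleting entries from the handlers list by hand, also lets logging release its internal references cleanly.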