Deadlock with logging multiprocess/multithread python script


Problem description

I am facing a problem with collecting logs from the following script. Once I set SLEEP_TIME to too small a value, the LoggingThread threads somehow block the logging module: the script freezes on the logging call inside the action function. If SLEEP_TIME is about 0.1, the script collects all log messages as I expect.

I tried to follow this answer, but it does not solve my problem.

import multiprocessing
import threading
import logging
import time

SLEEP_TIME = 0.000001

logger = logging.getLogger()

ch = logging.StreamHandler()
ch.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(funcName)s(): %(message)s'))
ch.setLevel(logging.DEBUG)

logger.setLevel(logging.DEBUG)
logger.addHandler(ch)


class LoggingThread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        # Log as fast as possible; with a very small SLEEP_TIME this keeps the
        # handler lock busy almost continuously.
        while True:
            logger.debug('LoggingThread: {}'.format(self))
            time.sleep(SLEEP_TIME)


def action(i):
    logger.debug('action: {}'.format(i))


def do_parallel_job():

    processes = multiprocessing.cpu_count()
    pool = multiprocessing.Pool(processes=processes)
    for i in range(20):
        pool.apply_async(action, args=(i,))
    pool.close()
    pool.join()



if __name__ == '__main__':

    logger.debug('START')

    #
    # multithread part
    #
    for _ in range(10):
        lt = LoggingThread()
        lt.setDaemon(True)
        lt.start()

    #
    # multiprocess part
    #
    do_parallel_job()

    logger.debug('FINISH')

How can I use the logging module in a script that mixes multiprocessing and multithreading?

Answer

This is probably Python bug 6721, "Locks in the standard library should be sanitized on fork".

The problem is common in any situation where you have locks, threads, and forks. If thread 1 holds a lock while thread 2 calls fork, then in the forked process there will only be thread 2, and the lock will be held forever. In your case, that lock is logging.StreamHandler.lock.
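
For illustration, here is a minimal sketch of that kind of fix, assuming Python 3.7+ where os.register_at_fork is available (recent CPython versions install a similar hook for logging internally); the helper name _reinit_logging_locks is only for this sketch. It re-creates every handler's lock in the child so a lock copied in a "held" state cannot deadlock it:

import logging
import os

def _reinit_logging_locks():
    # Runs in the child immediately after fork(): discard the handler locks
    # inherited from the parent (possibly held by a thread that no longer
    # exists in the child) and create fresh, unlocked ones.
    for handler in logging.getLogger().handlers:
        handler.createLock()

# Register once in the parent, before any worker processes are forked.
os.register_at_fork(after_in_child=_reinit_logging_locks)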

A fix for the logging module can be found here. Note that you need to take care of any other locks, too.
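
An alternative that is not part of the linked fix: if your Python version supports selectable start methods (3.4+), starting the pool with the 'spawn' method avoids fork entirely, so a lock held by a logging thread in the parent is never copied into a child. A sketch of the question's do_parallel_job rewritten this way:

def do_parallel_job():
    # 'spawn' starts each worker as a fresh interpreter instead of forking,
    # so workers never inherit a lock that happened to be held at fork time.
    ctx = multiprocessing.get_context('spawn')
    pool = ctx.Pool(processes=multiprocessing.cpu_count())
    for i in range(20):
        pool.apply_async(action, args=(i,))
    pool.close()
    pool.join()

Note that with 'spawn' each worker re-imports the module, so module-level logging configuration is executed again in every worker, and worker start-up is slower than with fork.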
