How to make subprocess only communicate error


We have created a commodity function used in many projects which uses subprocess to start a command. This function is as follows:

def _popen( command_list ):
    p = subprocess.Popen( command_list, stdout=subprocess.PIPE,
        stderr=subprocess.PIPE )

    out, error_msg = p.communicate()

    # Some processes (e.g. system_start) print a number of dots in stderr
    # even when no error occurs.
    if error_msg.strip('.') == '':
        error_msg = ''

    return out, error_msg

For most processes this works as intended.

But now I have to use it with a background process which needs to keep running as long as my python script is running, and thus now the fun starts ;-).
Note: the script also needs to start other non-background processes using this same _popen function.

I know that by skipping p.communicate I can make the process start in the background, while my python script continues.
But there are 2 problems with this:

  1. I need to check that the background process started correctly
  2. While the main process is running I need to check the stdout and stderr of the background process from time to time, without stopping the process or ending up hanging on the background process.

Check background process started correctly
For 1 I currently adapted the _popen version to take an extra parameter 'skip_com' (default False) to skip the p.communicate call. In that case I return the p object instead of out and error_msg, so that I can check whether the process is running directly after starting it and, if not, call communicate on the p object to see what the error_msg is.

MY_COMMAND_LIST = [ "<command that should go to background>" ]

def _popen( command_list, skip_com=False ):    
    p = subprocess.Popen( command_list, stdout=subprocess.PIPE,
        stderr=subprocess.PIPE )

    if not skip_com:
        out, error_msg = p.communicate()

        # Some processes (e.g. system_start) print a number of dots in stderr
        # even when no error occurs.
        if error_msg.strip('.') == '':
            error_msg = ''

        return out, error_msg
    else:
        return p

...
p = _popen( MY_COMMAND_LIST, True )
error = _get_command_pid( MY_COMMAND_LIST ) # checks if background command is running using _popen and ps -ef
if error:
    _, error_msg = p.communicate()

I do not know if there is a better way to do this.
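One simpler alternative to parsing ps -ef is to ask the Popen object itself via poll(). This is only a sketch: the start_background name and the 0.5 s grace period are assumptions about how quickly a failing command exits, not part of the original _popen.

```python
import subprocess
import time

def start_background(command_list):
    """Start command_list in the background and verify it came up.

    Returns (process, error_msg); error_msg is empty on success.
    Sketch only: the 0.5 s grace period assumes a failing command
    exits quickly.
    """
    p = subprocess.Popen(command_list, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    time.sleep(0.5)            # give the command a moment to fail
    if p.poll() is not None:   # already exited -> startup failed
        _, error_msg = p.communicate()
        return p, error_msg
    return p, ''
```

This avoids spawning a second process just to check the first one, but it cannot detect a command that starts successfully and only fails later; for that the stdout/stderr monitoring below is still needed.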

Check stdout / stderr
For 2 I have not found a solution which does not cause the script to wait for the end of the background process.
The only way I know to communicate is using iter on e.g. p.stdout.readline. But that will hang if the process is still running:

for line in iter( p.stdout.readline, "" ):
    print line

Any one an idea how to do this?

/edit/ I need to check the data I get from stdout and stderr separately. Especially stderr is important in this case, since if the background process encounters an error it will exit, and I need to catch that in my main program to be able to prevent errors caused by that exit.

The stdout output is needed in some situations to check the expected behaviour in the background process and to react on that.

Solution

Update

The subprocess will actually exit if it encounters an error

If you don't need to read the output to detect an error then redirect it to DEVNULL and call .poll() to check the child process's status from time to time without stopping the process.
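A minimal sketch of that approach (subprocess.DEVNULL requires Python 3.3+; the sleep command here is just a stand-in for a real background command):

```python
import subprocess

# Discard all output; we only care whether the child is still alive.
process = subprocess.Popen(["sleep", "5"],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)

# ... later, from the main loop, check without blocking:
if process.poll() is None:
    print("background process still running")
else:
    print("background process exited with", process.returncode)
```

poll() returns immediately: None while the child runs, the exit code once it has finished.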


Assuming you have to read the output:

Do not use stdout=PIPE, stderr=PIPE unless you read from the pipes. Otherwise, the child process may hang as soon as any of the corresponding OS pipe buffers fills up.

If you want to start a process and do something else while it is running then you need a non-blocking way to read its output. A simple portable way is to use a thread:

from contextlib import contextmanager
from subprocess import Popen, PIPE, STDOUT
from threading import Thread

@contextmanager
def finishing(process):      # close pipe, call .wait() on exit
    try:
        yield process
    finally:
        process.stdout.close()
        process.wait()

def process_output(process):
    with finishing(process):
        for line in iter(process.stdout.readline, b''):
            if detected_error(line):              # detected_error() and
                communicate_error(process, line)  # communicate_error() are your own callbacks

process = Popen(command, stdout=PIPE, stderr=STDOUT, bufsize=1)
Thread(target=process_output, args=[process]).start()

I need to check the data I get from stdout and stderr separately.

Use two threads:

from contextlib import contextmanager
from subprocess import Popen, PIPE
from threading import Thread

@contextmanager
def waiting(process):        # call .wait() when the reader finishes
    try:
        yield process
    finally:
        process.wait()

def read_stdout(process):
    with waiting(process), process.stdout:  # close pipe, call .wait()
        for line in iter(process.stdout.readline, b''):
            do_something_with_stdout(line)  # your own callback

def read_stderr(process):
    with process.stderr:
        for line in iter(process.stderr.readline, b''):
            if detected_error(line):              # your own callbacks
                communicate_error(process, line)

process = Popen(command, stdout=PIPE, stderr=PIPE, bufsize=1)
Thread(target=read_stdout, args=[process]).start()
Thread(target=read_stderr, args=[process]).start()

You could put the code into a custom class (to group do_something_with_stdout(), detected_error(), communicate_error() methods).
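Such a class could look like the sketch below. The class name, the callback names, and the dot-stripping error rule (borrowed from the question's _popen) are all made up for illustration; override the last three methods for project-specific behaviour.

```python
from subprocess import Popen, PIPE
from threading import Thread

class BackgroundProcess:
    """Run a command in the background and watch stdout/stderr in threads."""

    def __init__(self, command):
        self.process = Popen(command, stdout=PIPE, stderr=PIPE, bufsize=1)
        Thread(target=self._read_stdout, daemon=True).start()
        Thread(target=self._read_stderr, daemon=True).start()

    def _read_stdout(self):
        with self.process.stdout:
            for line in iter(self.process.stdout.readline, b''):
                self.handle_stdout(line)

    def _read_stderr(self):
        with self.process.stderr:
            for line in iter(self.process.stderr.readline, b''):
                if self.detected_error(line):
                    self.communicate_error(line)

    # Override these in a subclass for project-specific behaviour.
    def handle_stdout(self, line):
        pass

    def detected_error(self, line):
        # Assumption from the question: dots on stderr are not errors.
        return line.strip(b'.\n') != b''

    def communicate_error(self, line):
        print("background process error:", line)
```

A subclass can then collect stdout lines and react to stderr without any explicit thread management at the call site.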
