Python subprocess won't interleave stderr and stdout as the terminal does

Question

A test program:

#!/usr/bin/env python3

import sys

count = 0
sys.stderr.write('stderr, order %d\n' % count)
count += 1
sys.stdout.write('stdout, order %d\n' % count)
count += 1
sys.stderr.write('stderr, order %d\n' % count)
count += 1
sys.stdout.write('stdout, order %d\n' % count)

When invoked through the terminal, the expected output is:

stderr, order 0
stdout, order 1
stderr, order 2
stdout, order 3

In the interactive shell, when I redirect stdout to a PIPE, the output order differs from the output above: Popen groups all the stderr lines and writes them first, then does the same for stdout, instead of interleaving stdout and stderr.

In [29]: a = sp.run(['./test.py'], stderr=sp.STDOUT)
stderr, order 0
stdout, order 1
stderr, order 2
stdout, order 3

In [30]: a
Out[30]: CompletedProcess(args=['./test.py'], returncode=0)

In [33]: b = sp.Popen(['./test.py'], stderr=sp.STDOUT, stdout=sp.PIPE, encoding='utf-8')

In [34]: print(b.communicate()[0])
stderr, order 0
stderr, order 2
stdout, order 1
stdout, order 3

Answer

In the C library (and thus in C-based Python), streams are handled differently depending on whether or not they are attached to an interactive terminal (or something pretending to be one). For a tty, stdout is line-buffered; otherwise it is block-buffered and only flushed to the file descriptor when a block boundary is hit. When you redirect to a PIPE, the stream is no longer a tty and block buffering takes effect.
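As a minimal sketch (not part of the original answer): since ./test.py is itself a Python script, the child's block buffering can also be switched off from the caller's side by setting PYTHONUNBUFFERED=1 in its environment (equivalent to running it with python3 -u), without modifying the script.

import os
import subprocess as sp

# Assumption: the child is a Python script, so PYTHONUNBUFFERED=1 disables
# its stdio buffering and the piped output interleaves like terminal output.
env = dict(os.environ, PYTHONUNBUFFERED='1')
b = sp.Popen(['./test.py'], stderr=sp.STDOUT, stdout=sp.PIPE,
             encoding='utf-8', env=env)
print(b.communicate()[0])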

The solution is to reopen stdout, specifying that you want line buffering (1) regardless. At the C level, stderr is always line-buffered, but when I tested reopening only stdout, the program acted as though stderr were block-buffered, which surprised me. Maybe it's the intermediate io.TextIO layer or some other odd thing, but I found I needed to fix both pipes.

Even though stdout and stderr go to the same pipe, as far as the executed program is concerned they are separate file descriptors with separate buffers. That's why interleaving doesn't happen naturally in the output even in block mode.

#!/usr/bin/env python3

import sys
import os

# reopen stdout line buffered
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 1)

# this surprises me: it seems we have to reopen stderr line-buffered
# as well, even though I thought it was line-buffered anyway.
# perhaps it's the intermediate Python TextIO layer?
sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 1)

count = 0
sys.stderr.write('stderr, order %d\n' % count)
count += 1
sys.stdout.write('stdout, order %d\n' % count)
count += 1
sys.stderr.write('stderr, order %d\n' % count)
count += 1
sys.stdout.write('stdout, order %d\n' % count)
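
With both streams reopened line-buffered, re-running the Popen call from the question against the fixed script (a sketch; the output should now come out in the interleaved terminal order):

import subprocess as sp

# Same invocation as in the question, now against the fixed test.py;
# expected order: stderr 0, stdout 1, stderr 2, stdout 3.
b = sp.Popen(['./test.py'], stderr=sp.STDOUT, stdout=sp.PIPE, encoding='utf-8')
print(b.communicate()[0])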
