subprocess.Popen.stdout - reading stdout in real-time (again)

Problem description

Again, the same question.
The reason is that I still can't make it work after reading the following:

My case is that I have a console app written in C; let's take, for example, this code in a loop:

tmp = 0.0;
printf("\ninput>>");
scanf_s("%f", &tmp);
printf("\ninput was: %f", tmp);

It continuously reads some input and writes some output.

My python code to interact with it is the following:

import subprocess

# 'path' points to the compiled C program above
p = subprocess.Popen([path], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
p.stdin.write('12345\n')
for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()

So far, whenever I read from p.stdout, it always waits until the process is terminated and then outputs an empty string. I've tried lots of stuff, but still the same result.

I tried Python 2.6 and 3.1, but the version doesn't matter - I just need to make it work somewhere.

Recommended answer

Trying to write to and read from pipes to a sub-process is tricky because of the default buffering going on in both directions. It's extremely easy to get a deadlock where one or the other process (parent or child) is reading from an empty buffer, writing into a full buffer or doing a blocking read on a buffer that's awaiting data before the system libraries flush it.
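
For what it's worth, here is a minimal sketch of how that deadlock shows up in practice. This is an illustration only, assuming a POSIX system with cat available: the parent keeps writing without ever draining the child's stdout, so once both pipe buffers fill up, the two processes block on each other forever.

import subprocess

# Anti-pattern sketch: the parent writes a large amount of data but never
# reads the child's stdout. 'cat' echoes stdin back to stdout; once its
# stdout pipe is full it stops reading, the stdin pipe then fills too, and
# both processes block in write() - a classic pipe deadlock.
p = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
chunk = b'x' * 65536
for _ in range(64):          # far more than a typical pipe buffer holds
    p.stdin.write(chunk)     # eventually blocks here
p.stdin.close()
print(p.stdout.read())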

For more modest amounts of data, the Popen.communicate() method might be sufficient. However, for data that exceeds its buffering you'd probably get stalled processes (similar to what you're already seeing?).
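
For the one-shot case, that might look roughly like the sketch below ('path' is the executable from the question; note that communicate() closes stdin and waits for the child to exit, so there is no back-and-forth interaction):

import subprocess

# Sketch of the communicate() route: send everything up front, then collect
# all output once the child exits. Suitable for modest, one-shot exchanges,
# not for interactive read/write loops.
p = subprocess.Popen([path], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = p.communicate(b'12345\n')   # err is None since stderr isn't piped
print(out)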

You might want to look for details on using the fcntl module and making one or the other (or both) of your file descriptors non-blocking. In that case, of course, you'll have to wrap all reads and/or writes to those file descriptors in the appropriate exception handling to handle the "EWOULDBLOCK" events. (I don't remember the exact Python exception that's raised for these).
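
As a rough sketch of that idea (not a drop-in fix; it assumes a Unix-like platform and reuses the 'path' variable from the question), you can flip the child's stdout to non-blocking mode with fcntl and treat EAGAIN/EWOULDBLOCK, which Python surfaces as an OSError/IOError carrying that errno, as "no data yet":

import errno
import fcntl
import os
import subprocess
import time

p = subprocess.Popen([path], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Put the read end of the child's stdout pipe into non-blocking mode.
fd = p.stdout.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

p.stdin.write(b'12345\n')
p.stdin.flush()

while p.poll() is None:
    try:
        data = os.read(fd, 4096)          # returns at once instead of blocking
    except OSError as e:
        if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
            time.sleep(0.1)               # nothing buffered yet; poll again
            continue
        raise
    if data:
        print(">>> " + repr(data))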

A completely different approach would be for your parent to use the select module and os.fork() ... and for the child process to execve() the target program after directly handling any file descriptor dup()ing. (Basically you'd be re-implementing parts of Popen(), but with different parent file descriptor (PIPE) handling.)
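
A very rough sketch of that route follows (assumptions: a Unix-like OS, the 'path' from the question, and no error handling or stderr plumbing). The child dup2()s the pipe ends onto fds 0 and 1 and exec()s the program; the parent select()s on the read end so it only reads when data is actually available.

import os
import select

child_in_r, child_in_w = os.pipe()     # parent writes -> child's stdin
child_out_r, child_out_w = os.pipe()   # child's stdout -> parent reads

pid = os.fork()
if pid == 0:                           # child
    os.dup2(child_in_r, 0)             # wire the pipe ends onto stdin/stdout
    os.dup2(child_out_w, 1)
    os.close(child_in_w)
    os.close(child_out_r)
    os.execv(path, [path])             # replaces the child; never returns

# parent
os.close(child_in_r)
os.close(child_out_w)
os.write(child_in_w, b'12345\n')

while True:
    ready, _, _ = select.select([child_out_r], [], [], 1.0)
    if child_out_r in ready:
        data = os.read(child_out_r, 4096)
        if not data:                   # EOF: the child closed its stdout
            break
        print(">>> " + repr(data))

os.waitpid(pid, 0)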

Incidentally, .communicate, at least in Python's 2.5 and 2.6 standard libraries, will only handle about 64K of remote data (on Linux and FreeBSD). This number may vary based on various factors (possibly including the build options used to compile your Python interpreter, or the version of libc being linked to it). It is NOT simply limited by available memory (despite J.F. Sebastian's assertion to the contrary) but is limited to a much smaller value.
