subprocess.Popen.stdout - reading stdout in real-time (again)


Problem description


Again, the same question.
The reason is that I still can't make it work, even after reading the related answers.

My case is that I have a console app written in C; let's take, for example, this code in a loop:

tmp = 0.0;
printf("\ninput>>");
scanf_s("%f", &tmp);
printf("\ninput was: %f", tmp);

It continuously reads some input and writes some output.

My python code to interact with it is the following:

import subprocess

p = subprocess.Popen([path], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
p.stdin.write('12345\n')
for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()

So far, whenever I read from p.stdout, it always waits until the process is terminated and then outputs an empty string. I've tried lots of stuff - but still the same result.

I tried Python 2.6 and 3.1, but the version doesn't matter - I just need to make it work somewhere.

Solution

Trying to write to and read from pipes to a sub-process is tricky because of the default buffering going on in both directions. It's extremely easy to get a deadlock where one or the other process (parent or child) is reading from an empty buffer, writing into a full buffer or doing a blocking read on a buffer that's awaiting data before the system libraries flush it.

For more modest amounts of data, the Popen.communicate() method might be sufficient. However, for data that exceeds its buffering, you'd probably get stalled processes (similar to what you're already seeing?).
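A minimal sketch of the communicate() approach. For illustration, the child here is a small Python one-liner standing in for the C program (an assumption, since the real target binary isn't available); note that communicate() writes the input, closes stdin, and reads until EOF, so it only fits cases where the child terminates once its input is consumed:

```python
import subprocess
import sys

# Stand-in child: reads one number from stdin, echoes it, then exits.
child = [sys.executable, "-c", "print('input was: %f' % float(input()))"]

p = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate(input=b"12345\n")   # write, close stdin, read to EOF
print(out.decode().strip())                # -> input was: 12345.000000
```

This sidesteps the deadlock because communicate() services both pipes for you, but it gives you all of the output at once, not in real time.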

You might want to look for details on using the fcntl module and making one or the other (or both) of your file descriptors non-blocking. In that case, of course, you'll have to wrap all reads and/or writes to those file descriptors in the appropriate exception handling for the "would block" case: in Python 3 a non-blocking read with no data ready raises BlockingIOError, while in Python 2 you get an OSError/IOError with errno set to EAGAIN (EWOULDBLOCK).
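A sketch of that fcntl approach (Unix only), again using a Python one-liner as a stand-in child that pauses before printing, so the polling loop is actually exercised:

```python
import fcntl
import os
import subprocess
import sys
import time

# Stand-in child: sleeps briefly, then prints one line and exits.
child = [sys.executable, "-c", "import time; time.sleep(0.2); print('hello')"]
p = subprocess.Popen(child, stdout=subprocess.PIPE)

# Put the read end of the pipe into non-blocking mode.
fd = p.stdout.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

data = b""
while p.poll() is None or data == b"":
    try:
        chunk = os.read(fd, 4096)    # returns immediately, never blocks
        if chunk:
            data += chunk
    except BlockingIOError:          # nothing buffered yet; try again
        time.sleep(0.05)
print(data.decode().strip())
```

The loop never blocks on the pipe, so the parent stays responsive; the cost is that you busy-poll (or must combine this with select) instead of letting a blocking read wake you.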

A completely different approach would be for your parent to use the select module and os.fork() ... and for the child process to execve() the target program after directly handling any file dup()ing. (Basically you'd be re-implementing parts of Popen(), but with different handling of the parent's file descriptors (the pipes).)
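A bare-bones sketch of that fork/exec route (Unix only). The pipe is built by hand, dup2()ed onto the child's stdout, and the parent waits on it with select(); the exec'd one-liner is a hypothetical stand-in for the real target program, and execv() is used here rather than execve() simply because the environment doesn't need replacing:

```python
import os
import select
import sys

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: wire stdout to the pipe's write end, then exec the target.
    os.close(r)
    os.dup2(w, 1)
    os.execv(sys.executable, [sys.executable, "-c", "print('ready')"])
else:
    # Parent: wait (up to 5 s) for the child to produce output.
    os.close(w)
    ready, _, _ = select.select([r], [], [], 5.0)
    line = os.read(r, 4096).decode().strip() if ready else ""
    os.waitpid(pid, 0)
    os.close(r)
    print(line)
```

The point of doing it by hand is that select() tells you the moment data is available on the raw descriptor, with none of the file-object buffering that makes the original for-loop block until EOF.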

Incidentally, .communicate, at least in Python's 2.5 and 2.6 standard libraries, will only handle about 64K of remote data (on Linux and FreeBSD). This number may vary based on various factors (possibly including the build options used to compile your Python interpreter, or the version of libc being linked to it). It is NOT simply limited by available memory (despite J.F. Sebastian's assertion to the contrary) but is limited to a much smaller value.
