Python subprocesses experience mysterious delay in receiving stdin EOF


Problem description


I reduced a problem I was seeing in my application down into the following test case. In this code, a parent process concurrently spawns 2 (you can spawn more) subprocesses that read a big message from the parent over stdin, sleep for 5 seconds, and write something back. However, there's unexpected waiting happening somewhere, causing the code to complete in 10 seconds instead of the expected 5.


If you set verbose=True, you can see that the straggling subprocess receives most of the message, then waits for the last 3-character chunk; it isn't detecting that the pipe has been closed. Furthermore, if I simply don't do anything with the second process (doreturn=True), the first process never sees EOF at all.


Any ideas what's happening? Further down is some example output. Thanks in advance.

from subprocess import *
from threading import *
from time import *
from traceback import *
import sys
# Save this file as stuckproc.py: the parent branch below re-invokes it as the child.
verbose = False
doreturn = False
msg = (20*4096+3)*'a'  # a big message: 20 full 4096-byte chunks plus 3 trailing bytes
def elapsed(): return '%7.3f' % (time() - start)
if sys.argv[1:]:
  # Child: read everything from stdin, sleep 5 seconds, then write a reply.
  start = float(sys.argv[2])
  if verbose:
    for chunk in iter(lambda: sys.stdin.read(4096), ''):
      print >> sys.stderr, '..', time(), sys.argv[1], 'read', len(chunk)
  else:
    sys.stdin.read()
  print >> sys.stderr, elapsed(), '..', sys.argv[1], 'done reading'
  sleep(5)
  print msg
else:
  # Parent: spawn two children concurrently and feed each the big message over stdin.
  start = time()
  def go(i):
    print elapsed(), i, 'starting'
    p = Popen(['python','stuckproc.py',str(i), str(start)], stdin=PIPE, stdout=PIPE)
    if doreturn and i == 1: return
    print elapsed(), i, 'writing'
    p.stdin.write(msg)
    print elapsed(), i, 'closing'
    p.stdin.close()
    print elapsed(), i, 'reading'
    p.stdout.read()
    print elapsed(), i, 'done'
  ts = [Thread(target=go, args=(i,)) for i in xrange(2)]
  for t in ts: t.start()
  for t in ts: t.join()

Example output:

  0.001 0 starting
  0.003 1 starting
  0.005 0 writing
  0.016 1 writing
  0.093 0 closing
  0.093 0 reading
  0.094 1 closing
  0.094 1 reading
  0.098 .. 1 done reading
  5.103 1 done
  5.108 .. 0 done reading
 10.113 0 done


I'm using Python 2.6.5 if that makes a difference.

Recommended answer


After way too much time, I figured it out when a quote from this post jumped out at me:


See the "I/O on Pipes and FIFOs" section of pipe(7) ("man 7 pipe")


"If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0)."

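To make that rule concrete, here is a minimal sketch (not from the original post; POSIX-only, and written so it runs on the Python 2.x used in the question as well as Python 3). It shows that a pipe only delivers EOF once every descriptor referring to its write end has been closed:

import os, select

r, w = os.pipe()
w2 = os.dup(w)                  # a second descriptor for the same write end

os.write(w, b'x')
os.close(w)                     # close the original write descriptor

os.read(r, 4096)                # the buffered byte is still delivered
# No EOF yet: w2 still refers to the write end, so a read() here would block.
print(select.select([r], [], [], 1.0)[0])   # [] - nothing to read, no EOF

os.close(w2)                    # close the *last* write descriptor
print(select.select([r], [], [], 1.0)[0])   # [r] - readable again
print(repr(os.read(r, 4096)))               # '' (b'' on Python 3) - that read is the EOF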

I should've known this, but it never occurred to me; it had nothing to do with Python in particular. What was happening was that the subprocesses were getting forked with open (writer) file descriptors to each other's pipes. As long as there are open writer file descriptors to a pipe, readers won't see EOF.

For example:

p1 = Popen(..., stdin=PIPE, ...)  # creates a pipe the parent process can write to
p2 = Popen(...)                   # forked while that write FD was open, so it inherits it -
                                  # as long as p2 exists, p1 won't see EOF

That is exactly what the timings above show: the child spawned second inherits the write end of the first child's stdin pipe, so the first child only sees EOF once the second child has exited roughly 5 seconds later, which is where the extra 5 seconds comes from.


It turns out Popen has a close_fds parameter, so the solution is to pass close_fds=True. All simple and obvious in hindsight, but it still cost a couple of people good chunks of time.
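Applied to the test case above, the fix is one extra keyword argument on the Popen call (a sketch; on POSIX with Python 2.x, close_fds=True makes the child close every inherited descriptor above 2 before exec, so it no longer holds the write end of its sibling's stdin pipe):

p = Popen(['python', 'stuckproc.py', str(i), str(start)],
          stdin=PIPE, stdout=PIPE, close_fds=True)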

