Too many open files in python


Problem Description

I wrote kind of a test suite which is heavily file intensive. After some time (about two hours) I get an IOError: [Errno 24] Too many open files: '/tmp/tmpxsqYPm'. I double-checked that every file handle gets closed again, but the error persists.

I tried to figure out the number of allowed file descriptors using resource.RLIMIT_NOFILE and to list the currently open file descriptors:

import fcntl
import resource

def get_open_fds():
    fds = []
    # resource.RLIMIT_NOFILE is only the index of the limit; the actual
    # soft limit has to be queried with resource.getrlimit().
    soft_limit, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
    for fd in range(3, soft_limit):
        try:
            # F_GETFD raises IOError if fd is not an open descriptor.
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        except IOError:
            continue
        fds.append(fd)
    return fds
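
For reference, on Linux the same information can be read from /proc/self/fd, which maps each open descriptor to the path it refers to. A minimal sketch, assuming a Linux system with /proc mounted (the helper name get_open_fd_names is illustrative, not from the original post):

import os

def get_open_fd_names():
    # Each entry in /proc/self/fd is a symlink from a descriptor number
    # to the file it refers to; listing the directory itself briefly
    # opens one extra descriptor.
    names = {}
    for fd in os.listdir('/proc/self/fd'):
        try:
            names[int(fd)] = os.readlink('/proc/self/fd/%s' % fd)
        except OSError:
            # The descriptor was closed between listdir() and readlink().
            continue
    return names

print(get_open_fd_names())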

So if I run the following test:

print get_open_fds()
for i in range(0,100):
    f = open("/tmp/test_%i" % i, "w")
    f.write("test")
    print get_open_fds()

I get the following output:

[]
/tmp/test_0
[3]
/tmp/test_1
[4]
/tmp/test_2
[3]
/tmp/test_3
[4]
/tmp/test_4
[3]
/tmp/test_5
[4] ...

That's strange: I expected an increasing number of open file descriptors. Is my script correct?

I'm using Python's logger and subprocess. Could that be the reason for my fd leak?

Thanks, Daniel

Recommended Answer

Your test script overwrites f on each iteration, which means that the previous file gets closed each time: rebinding f drops the last reference to the old file object, and CPython closes the underlying descriptor as soon as that reference count hits zero. That is why only one descriptor (alternating between 3 and 4) shows up at a time. Both logging to files and subprocess with pipes use up descriptors, which can lead to exhaustion.
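
To make these failure modes concrete, here is a minimal sketch of the three behaviours mentioned above; the logger name "demo", the paths under /tmp, and the echo subprocess are illustrative (assuming a POSIX system), not taken from the original post:

import logging
import subprocess

# 1. Rebinding f each iteration drops the last reference to the previous
#    file object; CPython closes it immediately, so at most one of these
#    descriptors is open at a time.
for i in range(100):
    f = open("/tmp/test_%i" % i, "w")
    f.write("test")
f.close()

# A with-block makes the close explicit instead of relying on GC:
with open("/tmp/test_0", "w") as f:
    f.write("test")

# 2. A logging FileHandler holds its descriptor until it is closed and
#    removed; forgetting this in a long-running test suite leaks one fd
#    per handler created.
logger = logging.getLogger("demo")
handler = logging.FileHandler("/tmp/demo.log")
logger.addHandler(handler)
logger.warning("hello")
logger.removeHandler(handler)
handler.close()

# 3. Each subprocess pipe is a descriptor; communicate() reads the output
#    and closes the pipes, and also reaps the child process.
p = subprocess.Popen(["echo", "hi"], stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out)

Relying on rebinding works in CPython because file objects are closed when their reference count drops to zero, but other implementations may defer that to a later garbage-collection pass, so explicit close() calls or with-blocks are the robust choice.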

