Parsing large XML using iterparse() consumes too much memory. Any alternative?

Problem Description

I am using Python 2.7 with the latest lxml library to parse a large XML file that has a very homogeneous structure and millions of elements. I assumed lxml's iterparse would not build an internal tree as it parses, but apparently it does, since memory usage grows until the process crashes (at around 1 GB). Is there a way to parse a large XML file with lxml without using a lot of memory?
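For illustration, the pattern below is the kind of loop that shows this growth. It is a minimal sketch, not the original code; the file name 'huge.xml' and the 'item' tag are hypothetical placeholders.

import lxml.etree as ET

count = 0
for event, elem in ET.iterparse('huge.xml', events=('end',), tag='item'):
    count += 1  # process elem here
    # Without elem.clear(), every parsed element stays attached to the
    # in-memory tree, so memory grows with the size of the file.
print(count)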

I saw the target parser interface as one possibility, but I'm not sure whether that would work any better.
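For reference, the target interface hands parse events to methods on a user-supplied object instead of building a tree. The sketch below is a minimal illustration of that shape, not a drop-in solution; the Collector class and 'huge.xml' are hypothetical.

import lxml.etree as ET

class Collector(object):
    # Hypothetical target: counts elements instead of building a tree.
    def __init__(self):
        self.count = 0
    def start(self, tag, attrib):
        pass  # called for each opening tag
    def end(self, tag):
        self.count += 1  # called for each closing tag
    def data(self, text):
        pass  # called for text content
    def close(self):
        return self.count  # becomes the return value of parse()

parser = ET.XMLParser(target=Collector())
result = ET.parse('huge.xml', parser)  # returns Collector.close()'s value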

Recommended Answer

Try using Liza Daly's fast_iter:

def fast_iter(context, func, args=[], kwargs={}):
    # http://www.ibm.com/developerworks/xml/library/x-hiperfparse/
    # Author: Liza Daly
    for event, elem in context:
        func(elem, *args, **kwargs)
        # Clear the element's content once it has been handled.
        elem.clear()
        # Also delete now-empty preceding siblings, which would
        # otherwise stay attached to the root and accumulate.
        while elem.getprevious() is not None:
            del elem.getparent()[0]
    del context

fast_iter removes each element from the tree after it has been processed, along with any preceding elements (possibly with other tags) that are no longer needed.

It can be used like this:

import lxml.etree as ET

def process_element(elem):
    ...

context = ET.iterparse(filename, events=('end',), tag=...)
fast_iter(context, process_element)
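As a concrete example, to print the text of every <item> element in a file named 'data.xml' (both the tag and the file name are hypothetical placeholders):

import lxml.etree as ET

def print_text(elem):
    print(elem.text)

context = ET.iterparse('data.xml', events=('end',), tag='item')
fast_iter(context, print_text)

Passing tag='item' means only matching end events reach the callback, while the sibling-deletion loop inside fast_iter keeps memory flat even for the elements that are skipped.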
