Parsing a large (~40GB) XML text file in python

Problem description

I've got an XML file I want to parse with python. What is the best way to do this? Reading the entire document into memory would be disastrous; I need to somehow read it one node at a time.

Existing XML solutions I know of:

  • ElementTree
  • minixml

but I'm afraid they aren't quite going to work because of the problem I mentioned. Also, I can't open it in a text editor - any good tips in general for working with giant text files?

Recommended answer

First, have you tried ElementTree (either the built-in pure-Python or C versions, or, better, the lxml version)? I'm pretty sure none of them actually read the whole file into memory.

The problem, of course, is that, whether or not it reads the whole file into memory, the resulting parsed tree ends up in memory.

ElementTree has a nifty solution that's pretty simple, and often sufficient: iterparse.

import xml.etree.ElementTree as ET

for event, elem in ET.iterparse(xmlfile, events=('end',)):
  ...

The key here is that you can modify the tree as it's built up (by replacing the contents with a summary containing only what the parent node will need). By throwing out all the stuff you don't need to keep in memory as it comes in, you can stick to parsing things in the usual order without running out of memory.
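As a minimal sketch of that idea (the `<orders>` file, tag names, and `price` attribute here are invented for illustration), you can summarize each element as its end tag arrives and then clear it:

```python
import io
import xml.etree.ElementTree as ET

# Hypothetical sample standing in for the huge file; iterparse also
# accepts a filename or any file-like object opened in binary mode.
xml_data = io.BytesIO(b"""<orders>
  <order id="1"><item price="10"/><item price="5"/></order>
  <order id="2"><item price="7"/></order>
</orders>""")

totals = {}
for event, elem in ET.iterparse(xml_data, events=('end',)):
    if elem.tag == 'order':
        # Summarize the children, then throw them away.
        totals[elem.get('id')] = sum(
            float(item.get('price')) for item in elem.iter('item'))
        elem.clear()

print(totals)  # {'1': 15.0, '2': 7.0}
```

Only the per-order totals survive; the `<item>` subtrees are discarded as soon as each `<order>` finishes parsing.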

The linked page gives more details, including some examples for modifying XML-RPC and plist as they're processed. (In those cases, it's to make the resulting object simpler to use, not to save memory, but they should be enough to get the idea across.)

This only helps if you can think of a way to summarize as you go. (In the most trivial case, where the parent doesn't need any info from its children, this is just elem.clear().) Otherwise, this won't work for you.
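In that trivial case, a sketch (the flat `<log>` file of `<entry>` records is made up here) is just `elem.clear()`, plus detaching finished children from the root so they can actually be garbage-collected:

```python
import io
import xml.etree.ElementTree as ET

# A long, flat file of hypothetical <entry> records.
xml_data = io.BytesIO(
    b"<log>" + b'<entry level="INFO"/>' * 1000 + b"</log>")

count = 0
# Grab the root from the first 'start' event so finished children can
# also be detached from it, keeping memory usage flat.
context = ET.iterparse(xml_data, events=('start', 'end'))
event, root = next(context)
for event, elem in context:
    if event == 'end' and elem.tag == 'entry':
        count += 1     # the parent needs nothing from this child
        elem.clear()
        root.clear()   # drop finished entries from the root's child list

print(count)  # 1000
```

Without the `root.clear()` step, the root element would silently accumulate a reference to every cleared child, defeating the purpose on a 40GB file.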

The standard solution is SAX, which is a callback-based API that lets you operate on the tree a node at a time. You don't need to worry about truncating nodes as you do with iterparse, because the nodes don't exist after you've parsed them.

Most of the best SAX examples out there are for Java or JavaScript, but they're not too hard to translate. For example, if you look at http://cs.au.dk/~amoeller/XML/programming/saxexample.html you should be able to figure out how to write it in Python (as long as you know where to find the documentation for xml.sax).
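For a flavor of what that looks like, here is a minimal `xml.sax` handler sketch (the tag names and the counting task are invented for illustration):

```python
import io
import xml.sax

class CountHandler(xml.sax.ContentHandler):
    """Counts elements by tag name; no tree is ever kept in memory."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        # Called once per opening tag as the parser streams through.
        self.counts[name] = self.counts.get(name, 0) + 1

handler = CountHandler()
# xml.sax.parse also accepts a filename, so the real 40GB file
# never needs to be opened by hand.
xml.sax.parse(io.BytesIO(b'<a><b/><b/><c/></a>'), handler)
print(handler.counts)  # {'a': 1, 'b': 2, 'c': 1}
```

The handler's callbacks are the only place your code ever sees the data, so memory usage stays bounded by whatever state you choose to accumulate.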

There are also some DOM-based libraries that work without reading everything into memory, but none that I know of that I'd trust to handle a 40GB file with reasonable efficiency.
