How to parse huge XML data from a webservice in an Android application?


Problem description

I am new to Android application development.

I am developing an application which needs to call a .NET webservice and parse XML data. When I parse normal-sized XML data it works with the DOM parser. But when the XML data is huge, it throws an "Out of memory" error.

Is there any other parser available in Android to parse huge XML data?

Please help me or suggest something.

I really need this as soon as possible.

Recommended answer

Use the SAX parser. SAX parsers are designed to handle huge XML files. Instead of loading the XML file into memory in one go, a SAX parser walks over the document element by element and notifies you as it goes.
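As a minimal sketch of the idea (the class and method names here are just placeholders, not anything from your code): you give the parser an InputStream and a callback handler, and it reads the stream bit by bit.

    import java.io.InputStream;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.helpers.DefaultHandler;

    // Parse a (possibly very large) XML stream with SAX instead of DOM.
    // The whole document is never held in memory; the handler is notified
    // element by element as the parser walks the stream.
    public final class SaxExample {
        static void parseWithSax(InputStream xmlStream, DefaultHandler handler) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(xmlStream, handler); // reads the stream incrementally
        }
    }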

Additionally, if the XML file is really big, you may also want to look at how the file is loaded. Don't open the file and feed the entire contents into the SAX parser in one go. Instead, read it chunk by chunk (e.g. 4 KB blocks at a time) and feed that into the SAX parser.
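A rough sketch of that, assuming you fetch the XML over HTTP with HttpURLConnection (the URL and the handler are whatever your app actually uses):

    import java.io.BufferedInputStream;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.helpers.DefaultHandler;

    public final class StreamingFetch {
        // Stream the webservice response into the SAX parser through a 4 KB buffer,
        // so the data is read block by block instead of being loaded whole.
        static void fetchAndParse(String serviceUrl, DefaultHandler handler) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(serviceUrl).openConnection();
            try (InputStream in = new BufferedInputStream(conn.getInputStream(), 4 * 1024)) {
                SAXParserFactory.newInstance().newSAXParser().parse(in, handler);
            } finally {
                conn.disconnect();
            }
        }
    }

On Android, run this off the main thread, since it does network I/O.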

Edit: A SAX parser works very differently from a DOM parser. Basically, it just goes through the document one element at a time. Whenever it finds an opening or closing tag, it calls one of your functions (as a callback) and tells it what the tag is and what its data is (if any). It starts at the beginning, goes through to the end, and never goes back. It is serial. This means two things:


  • More code. Your callbacks need to determine what to do when certain tags are encountered, which tags should be skipped, and so on. A SAX parser doesn't go back, so if you need to remember anything for later, you have to do that yourself. So yes, it will be more work to deal with many APIs containing many different tags (a handler sketch follows this list).

  • It can parse partial XML. It doesn't care if you feed it just the first 4 KB of an XML file. It will not generate an error, but will simply ask for another chunk of data when it is done. Only when it encounters a mismatched closing tag (or you stop feeding it data too soon) will it generate an error.
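Here is a sketch of what such a callback handler can look like; the ItemHandler name and the title element are made up for illustration, so substitute the tags your webservice actually returns:

    import java.util.ArrayList;
    import java.util.List;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Hypothetical handler that collects the text of every <title> element.
    public class ItemHandler extends DefaultHandler {
        private final List<String> titles = new ArrayList<>();
        private StringBuilder text;           // text of the element currently being read
        private boolean insideTitle = false;  // state the parser will not remember for you

        @Override
        public void startElement(String uri, String localName, String qName, Attributes attributes) {
            if ("title".equals(qName)) {      // called once per opening tag; qName is the tag name
                insideTitle = true;
                text = new StringBuilder();
            }
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            if (insideTitle) {
                text.append(ch, start, length); // may arrive in several pieces per element
            }
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            if ("title".equals(qName)) {      // called once per closing tag
                titles.add(text.toString().trim());
                insideTitle = false;
            }
        }

        public List<String> getTitles() {
            return titles;
        }
    }

You would pass an instance of this handler into the parse call shown above and read the collected values from getTitles() once parsing has finished.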

So yeah, it's more work. But the payoff is much greater speed and no problem parsing huge files that would not fit into memory.
