How do I keep reading an endless stream with fail tolerance

Question

Currently I'm working on an app that reads the stream from the Twitter API and parses it into objects. At the moment I read the stream and use ReadObject from DataContractJsonSerializer to build my objects.

This works great!

HOWEVER: I'm somewhat worried about what would happen in the off chance my program catches up with the stream (the internet slows down, or whatever) and there is not enough data to parse. The method will probably throw an exception, but I want to wait for new data, then retry the same object and continue.
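The real code here would use DataContractJsonSerializer in .NET; as a language-neutral illustration of the wait-and-retry idea, here is a minimal Python sketch (the helper name `read_records` and the line-delimited framing are assumptions for the example, not part of the original code):

```python
import io
import json
import time

def read_records(stream, poll_interval=0.1, max_idle=None):
    """Yield parsed objects from a line-delimited JSON stream.

    If no complete record is available yet (the reader has caught up
    with the producer), wait and try again instead of failing.
    max_idle bounds the number of consecutive empty reads (useful for
    tests); leave it as None to wait forever on a live stream.
    """
    buffer = b""
    idle = 0
    while True:
        chunk = stream.read(4096)
        if chunk:
            idle = 0
            buffer += chunk
            # Parse every complete line; keep the partial tail buffered
            # so a half-received record is retried on the next pass.
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                if line.strip():
                    yield json.loads(line)
        else:
            idle += 1
            if max_idle is not None and idle >= max_idle:
                return
            time.sleep(poll_interval)  # caught up: wait for new data, retry

# Demo with an in-memory stream standing in for the live feed.
feed = io.BytesIO(b'{"id": 1}\n{"id": 2}\n')
ids = [r["id"] for r in read_records(feed, poll_interval=0, max_idle=1)]
```

The key point is that partial data is never discarded: it stays in the buffer until enough bytes arrive to complete the record, which is the same behavior a blocking network stream gives you for free.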

Also, I was wondering how I could make the method more stable, in case corrupt data enters the stream or something like that.
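One common pattern for tolerating corrupt records is to isolate the parse of each record in its own try/except, log the bad record, and move on. A small Python sketch of that idea (the name `parse_tolerant` is made up for the example):

```python
import json
import logging

def parse_tolerant(lines):
    """Parse each record; skip corrupt ones instead of letting a single
    bad record stop the whole reader."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            logging.warning("skipping corrupt record: %r", line[:80])

records = list(parse_tolerant(['{"a": 1}', 'not json', '{"a": 2}']))
```

In the .NET version the equivalent would be catching the serializer's exception around each ReadObject call, so one malformed record costs you that record only, not the whole session.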

Thanks in advance for any answers/ideas :)

Answer

If your program catches up with the Twitter feed, DataContractJsonSerializer should just block until enough data arrives to complete the read. That isn't normally a concern, because streams are designed to hide latency from their readers.

Much more likely is that you won't catch up, but rather keep falling behind until you run out of memory, an OutOfMemoryException is thrown, and the program crashes.

I'd suggest that rather than trying to parse the stream on the fly, you write it to something like a file and read from that (maybe even in parallel, using rolling files or something).
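The spooling side of that suggestion can be sketched in a few lines of Python (the name `spool_stream`, the file-naming scheme, and the size threshold are all assumptions for the example; note this simple version rolls on a byte count, so a real implementation should roll on record boundaries instead to avoid splitting a record across two files):

```python
import io
import os
import tempfile

def spool_stream(stream, directory, max_bytes=1_000_000):
    """Copy a live stream into numbered spool files so that a separate
    parser can read them at its own pace, decoupled from the feed."""
    index, written = 0, 0
    out = open(os.path.join(directory, f"spool-{index:06d}.jsonl"), "wb")
    for chunk in iter(lambda: stream.read(4096), b""):
        out.write(chunk)
        written += len(chunk)
        if written >= max_bytes:  # roll over to a fresh file
            out.close()
            index, written = index + 1, 0
            out = open(os.path.join(directory,
                                    f"spool-{index:06d}.jsonl"), "wb")
    out.close()

# Demo: an in-memory stream stands in for the live Twitter feed.
spool_dir = tempfile.mkdtemp()
spool_stream(io.BytesIO(b'{"id": 1}\n{"id": 2}\n'), spool_dir, max_bytes=10)
```

This separates the failure domains: the writer only has to keep up with the network, while the parser can crash, retry, or fall behind without losing data, because everything is already on disk.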
