StreamReader.Close() response is very slow - please help


Problem Description

All,

I am interested in reading the text of a web page and parsing it.
After searching on this newsgroup I decided to use the following:

******************************* START OF CODE ************************
String sTemp = "http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1";

WebRequest myWebRequest = WebRequest.Create(sTemp);
WebResponse myWebResponse = myWebRequest.GetResponse();
Stream myStream = myWebResponse.GetResponseStream();

// default encoding is utf-8
StreamReader SR = new StreamReader( myStream );

Char[] buffer = new Char[2048];

// Read 256 characters at a time.
int count = SR.Read( buffer, 0, 2000 );

//while (count > 0)
//{
// do some processing - may read all or part
// count = SR.Read(buffer, 0, 2000);
//}

SR.Close(); // Release the resources
myWebResponse.Close();
******************************* END OF CODE ************************

This code should look very familiar because it is all over the
newsgroup and Microsoft support help pages.

The web page has a big table on it and it takes a while to download
(even with a cable modem).

What I observe is the following. If I open and read all the data
(i.e. until count > 0 fails), then stepping over SR.Close() executes
immediately. If I read only 2000 bytes as the above example shows, when
I step over SR.Close() it takes a long time (for me around 10-15
seconds). This may be a coincidence, but it seems to take the same
amount of time as if I were reading all of the data. At this point
I am starting to believe that SR.Close() does not abort reading until
the entire web page has been received. This is not desired; in
fact I parse the data and want to terminate loading, because the
entire process is so slow and not necessary all of the time.

Does anyone know how to terminate the loading of the page so I can
eliminate the delay? I had implemented this in C++ with MFC using
CInternetSession.OpenURL() and did not have this problem.

Thanks in advance.

Todd

Solution


Maybe you should take some programming classes.

Sami
www.capehill.net

*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!


Sami Vaaraniemi wrote:

Maybe you should take some programming classes.



Hey! It's an arrogant spammer :-P


Joerg, thanks for taking the time to respond to my request for help.

I'm sorry about the comment. I believe the comment for 256 bytes
belonged to the original example from which I copied the code. The 2000-
byte buffer is the actual size I was using in the program from
which I derived the sample code to illustrate my problem in this
newsgroup.

I did not notice that the entire page of data does not download. That
is a good catch. The code I wrote using Studio 6/C++ does not have
that problem (see original post).

I was able to implement the asynchronous approach using
WebResponse.BeginGetResponse() and WebResponse.EndGetResponse() that
you suggested, and note that it also does not download the entire page
of data, as you described.

What my program does is start to download the data up to a point and
then close the connection. The reason for closing the connection is
because it takes so long to get the entire amount of data and there is
not always a need to get all of it. The implementation I have using
Studio 6/C++ does this and works perfectly. It is very disappointing
that .Net/C# does not work.

Were you able to get better results using the socket approach?

How about some of you Microsoft gurus taking a look into this problem
and giving answers to the following two questions:

1) What do you do to download the entire page of data?
2) What can you do to close the connection with zero delay (after
reading 1 or more 2000-byte buffers of data)?
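[One approach to question 2, sketched in C# against the .NET 1.x API discussed in this thread (not code from the thread, and not tested against this server): call the request's Abort() before closing the reader, so that Close() has no remaining response body to drain from the connection.]

```csharp
using System;
using System.IO;
using System.Net;

class PartialDownload
{
    static void Main()
    {
        string url = "http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1";

        WebRequest request = WebRequest.Create(url);
        WebResponse response = request.GetResponse();
        StreamReader reader = new StreamReader(response.GetResponseStream());

        char[] buffer = new char[2048];
        int count = reader.Read(buffer, 0, 2000);
        Console.WriteLine("Read {0} chars", count);

        // Abort the request before closing the reader, so Close() does
        // not block while the rest of the 6 MB response is drained.
        request.Abort();

        try
        {
            reader.Close();
            response.Close();
        }
        catch (WebException)
        {
            // Close() may throw once the request has been aborted;
            // the connection is torn down either way.
        }
    }
}
```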

It is easy to set up this experiment by cutting and pasting the
sample code I gave into a button event of a simple Windows app.

Thanks in advance!
"Joerg Jooss" <jo*********@gmx.net> wrote in message news:<ei*************@tk2msftngp13.phx.gbl>...

No_Excuses wrote:

All,

I am interested in reading the text of a web page and parsing it.
After searching on this newsgroup I decided to use the following:

******************************* START OF CODE ************************
String sTemp =
"http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1";


WebRequest myWebRequest = WebRequest.Create(sTemp);
WebResponse myWebResponse = myWebRequest.GetResponse();
Stream myStream = myWebResponse.GetResponseStream();

// default encoding is utf-8
StreamReader SR = new StreamReader( myStream );

Char[] buffer = new Char[2048];

// Read 256 characters at a time.
int count = SR.Read( buffer, 0, 2000 );

//while (count > 0)
//{
// do some processing - may read all or part
// count = SR.Read(buffer, 0, 2000);
//}

SR.Close(); // Release the resources
myWebResponse.Close();
******************************* END OF CODE ************************

This code should look very familiar because it is all over the
newsgroup and Microsoft support help pages.



I doubt that, as the code doesn't do what it advertises ;-)

Char[] buffer = new Char[2048];

// Read 256 characters at a time.
int count = SR.Read( buffer, 0, 2000 );

Why a 2 kB buffer, when you're supposedly reading only 256 chars, but
you're specifying 2000 chars for the Read() call?

The web page has a big table on it and it takes a while to download
(even with a cable modem).

What I observe is the following. If I open and read all the data
(i.e.
until count > 0 fails, then stepping over SR.Close() execution time is
immediate. If I read only 2000 bytes as the above example shows, when
I step over SR.Close() it takes a long time (for me around 10-15
seconds). This may be a coincidence but it seems to take the same
amount of time as if I was reading all of the data.



Well, this particular page is an insane 6 MB... the web server does
not help the client either, as there's no Content-Length header provided,
just Connection: close:

HTTP/1.1 200 OK
Date: Sat, 10 Apr 2004 10:20:31 GMT
Server: Apache/1.3.24 (Unix) mod_throttle/3.1.2 PHP/4.2.0
Connection: close
Content-Type: text/html

Even more interestingly, I cannot even download the entire page at all...
neither WebClient nor WebRequest/WebResponse are able to download that
beast. Both stop downloading at the exact same position -- I guess the
underlying TCP stream is prematurely closed. This must be some WinInet
default behaviour (quirk?), as the same thing happens to me when I download
the page using some ancient Visual J++ code that uses plain TCP. I think
I'll write a plain HTTP client using System.Net.Sockets and see what
happens.

(Note: If the web server returns a Content-Length header, downloading the
page works just fine.)
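[A plain-TCP HTTP client of the kind Joerg describes could look like the sketch below. This is an illustration, not his actual code; the host and path are taken from the URL in the original post. With HTTP/1.0 and Connection: close, the end of the body is signalled by the server closing the socket, so no Content-Length is needed.]

```csharp
using System;
using System.Net.Sockets;
using System.Text;

class PlainHttpGet
{
    static void Main()
    {
        string host = "cgi3.igl.net";
        string path = "/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1";

        // TcpClient gives us the raw TCP stream, bypassing WinInet entirely.
        TcpClient client = new TcpClient(host, 80);
        NetworkStream stream = client.GetStream();

        string request = "GET " + path + " HTTP/1.0\r\n" +
                         "Host: " + host + "\r\n" +
                         "Connection: close\r\n\r\n";
        byte[] requestBytes = Encoding.ASCII.GetBytes(request);
        stream.Write(requestBytes, 0, requestBytes.Length);

        byte[] buffer = new byte[4096];
        int total = 0;
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            total += read;
            // ...process buffer[0..read) here, or break out early --
            // closing a TcpClient drops the connection immediately.
        }
        Console.WriteLine("Received {0} bytes", total);
        client.Close();
    }
}
```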

[...]

Does anyone know how to terminate the loading of the page so I can
eliminate the delay? I had implemented this in C++ with MFC using
CInternetSession.OpenURL() and did not have this problem.



Use asynchronous I/O -- see WebRequest.Abort(),
WebResponse.BeginGetResponse(), and WebResponse.EndGetResponse().
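[A minimal shape for that suggestion, sketched against the .NET 1.x asynchronous pattern (an illustration, not code from the thread): start the request with BeginGetResponse(), read a first chunk in the callback, and use Abort() to cancel the rest.]

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading;

class AsyncFetch
{
    static ManualResetEvent done = new ManualResetEvent(false);
    static WebRequest request;

    static void Main()
    {
        request = WebRequest.Create(
            "http://cgi3.igl.net/cgi-bin/ladder/teamsql/team_view.cgi?ladd=teamknights&num=238&showall=1");

        // Start the request without blocking; the callback fires once
        // the response headers have arrived.
        request.BeginGetResponse(new AsyncCallback(OnResponse), null);

        // If nothing arrives within 15 seconds, Abort() cancels the
        // request; the pending EndGetResponse() then throws.
        if (!done.WaitOne(15000, false))
        {
            request.Abort();
        }
    }

    static void OnResponse(IAsyncResult ar)
    {
        try
        {
            WebResponse response = request.EndGetResponse(ar);
            StreamReader reader = new StreamReader(response.GetResponseStream());
            char[] buffer = new char[2048];
            int count = reader.Read(buffer, 0, buffer.Length);
            // ...parse the first chunk, then abort to stop the download.
            request.Abort();
            response.Close();
        }
        catch (WebException)
        {
            // Raised when the request was aborted -- expected here.
        }
        finally
        {
            done.Set();
        }
    }
}
```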

Cheers,


