MSXML2.XMLHTTP


Problem Description


I wrote a small script that grabs two CSV files [links to the data files]
from a remote web site, parses them, and displays them in scrolling divs.
The first file has a little over 27k records; the second has fewer. It
retrieves the data pretty quickly, but it takes a while to write the page.

Is there a better alternative to this approach?
This is my page:
http://kiddanger.com/lab/getsaveurl.asp

This is the relevant code to retrieve the data:

function strQuote(strURL)
    dim objXML
    ' ServerXMLHTTP is the server-side-safe variant of XMLHTTP
    set objXML = CreateObject("MSXML2.ServerXMLHTTP")
    objXML.Open "GET", strURL, False   ' synchronous GET
    objXML.Send
    strQuote = objXML.ResponseText
    set objXML = nothing
end function

I split the data into an array and then split each element into a new array
because the delimiters are line feed and comma, respectively.
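The two-level split described above can be sketched like this (a minimal illustration; the variable names and the URL are placeholders, not the original code):

```vbscript
' Sketch of the two-level parse: rows split on line feed, fields on comma.
dim csvText, rows, fields, i
csvText = strQuote("http://example.com/data.csv")  ' placeholder URL
rows = Split(csvText, vbLf)                         ' one entry per record
for i = 0 to UBound(rows)
    fields = Split(rows(i), ",")                    ' one entry per column
    ' ... emit fields here ...
next
```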

TIA...

--
Roland Hall
/* This information is distributed in the hope that it will be useful, but
without any warranty; without even the implied warranty of merchantability
or fitness for a particular purpose. */
Technet Script Center - http://www.microsoft.com/technet/scriptcenter/
WSH 5.6 Documentation - http://msdn.microsoft.com/downloads/list/webdev.asp
MSDN Library - http://msdn.microsoft.com/library/default.asp

Recommended Answer

Roland Hall wrote:
I wrote a small script that grabs two CSV files [links to the data
files] from a remote web site, parses them out and displays them in
scrolling divs. The first file has a little over 27k records, the
second has less. It retrieves the data pretty quick but it takes
awhile to write the page.

Is there a better alternative to this approach?
This is my page:
http://kiddanger.com/lab/getsaveurl.asp

This is the relevant code to retrieve the data:

function strQuote(strURL)
dim objXML
set objXML = CreateObject("MSXML2.ServerXMLHTTP")
objXML.Open "GET", strURL, False
objXML.Send
strQuote = objXML.ResponseText
set objXML = nothing
end function

I split the data into an array and then split that into a new array
because the delimiters are line feed and comma, respectively.

TIA...







It's pretty tough to comment on this. You've identified the bottleneck as
the process of writing the data to the page, so the strQuote function is not
relevant, is it? What you do with the array contents seems to be more
relevant, at least to me.

Somebody (I think it might have been Chris Hohmann) posted an analysis of
different techniques for generating large blocks of html a few weeks ago
that you may find interesting.

Bob Barrows
--
Microsoft MVP -- ASP/ASP.NET
Please reply to the newsgroup. The email account listed in my From
header is my spam trap, so I don't check it very often. You will get a
quicker response by posting to the newsgroup.


"Bob Barrows [MVP]" wrote in message
news:uC**************@TK2MSFTNGP11.phx.gbl...
: Roland Hall wrote:
: > I wrote a small script that grabs two CSV files [links to the data
: > files] from a remote web site, parses them out and displays them in
: > scrolling divs. The first file has a little over 27k records, the
: > second has less. It retrieves the data pretty quick but it takes
: > awhile to write the page.
: >
: > Is there a better alternative to this approach?
: > This is my page:
: > http://kiddanger.com/lab/getsaveurl.asp
: >
: > This is the relevant code to retrieve the data:
: >
: > function strQuote(strURL)
: > dim objXML
: > set objXML = CreateObject("MSXML2.ServerXMLHTTP")
: > objXML.Open "GET", strURL, False
: > objXML.Send
: > strQuote = objXML.ResponseText
: > set objXML = nothing
: > end function
: >
: > I split the data into an array and then split that into a new array
: > because the delimiters are line feed and comma, respectively.
: >
: > TIA...
: >
:
: It's pretty tough to comment on this. You've identified the bottleneck as
: the process of writing the data to the page, so the strQuote function is not
: relevant, is it? What you do with the array contents seems to be more
: relevant, at least to me.

Hi Bob. Thanks for responding.

Perhaps. I'm assuming the data is retrieved due to the activity light on my
switch. I have not actually put timers in, which I guess would be the next
test.

:
: Somebody (I think it might have been Chris Hohmann) posted an analysis of
: different techniques for generating large blocks of html a few weeks ago
: that you may find interesting.

I searched this NG for all of Chris's postings and didn't find anything.
Then I searched for the reference you made and didn't find anything that way
either. Here is my subroutine for parsing the data; perhaps someone will
notice something that will help speed it up.

sub strWrite(str)
    ' str: raw CSV text; rows are delimited by line feeds, fields by commas.
    ' prt() is an output helper defined elsewhere (presumably wrapping Response.Write);
    ' strURL is a global set before each call.
    dim arr, i, arr2, j
    arr = split(str, vbLf)
    prt("<fieldset><legend style=""font-weight: bold"">" & arr(0) & " " & _
        strURL & "</legend>")
    prt("<div style=""height: 200px; overflow: auto; width: 950px"">")
    prt("<table style=""padding: 3px"">")
    for i = 1 to ubound(arr)
        arr2 = split(arr(i), ",")
        if i = 1 then
            prt("<tr style=""font-weight: bold"">")          ' header row
        else
            if i mod 2 = 0 then
                prt("<tr style=""background-color: #ddd"">") ' zebra striping
            else
                prt("<tr>")
            end if
        end if
        for j = 0 to ubound(arr2)
            prt("<td>" & arr2(j))
        next
    next
    prt("</table>")
    prt("</div>")
    prt("</fieldset>")
end sub

These are the calls for the two files:

dim strURL
strURL = "http://neustar.us/reports/rgp/domains_in_rgp.csv"
strWrite strQuote(strURL)
strURL = "http://neustar.us/reports/rgp/domains_out_rgp.csv"
strWrite strQuote(strURL)

I made some changes to my buffer and some variables, and it's noticeably
faster. It still takes about 4-5 seconds to parse the data, but I'm not sure
if that's all that bad for that amount.

I'm testing with two links, one on the Internet and one on my Intranet. The
Internet link normally displays them almost simultaneously. The Intranet
displays the first file, then almost as much of a delay for the next, which
is what I expected.

http://kiddanger.com/lab/getsaveurl.asp Internet
http://netfraud.us/asp/rgpr.asp Intranet

I wonder whether it would be faster if I wrote everything to a string and
then made only one write statement. Any ideas?
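One way to try the single-write idea above is to accumulate the markup in an array and emit it once with Join. This is only a sketch: the names buf, bufCount, and flushBuf are invented for illustration, and prt here is a drop-in replacement for the author's output helper.

```vbscript
' Accumulate output in a dynamic array, then emit it with a single write.
' Array + Join avoids the O(n^2) cost of repeated string concatenation.
dim buf(), bufCount
redim buf(1023)
bufCount = 0

sub prt(s)
    ' Grow the buffer geometrically instead of writing per line.
    if bufCount > UBound(buf) then redim preserve buf(UBound(buf) * 2)
    buf(bufCount) = s
    bufCount = bufCount + 1
end sub

sub flushBuf()
    ' One Response.Write for the whole page fragment.
    redim preserve buf(bufCount - 1)
    Response.Write Join(buf, vbCrLf)
end sub
```

With Response.Buffer enabled (the IIS default), per-call Response.Write overhead is already reduced, so measuring both approaches with timers would show whether the array buffer actually helps here.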

--
Roland Hall
/* This information is distributed in the hope that it will be useful, but
without any warranty; without even the implied warranty of merchantability
or fitness for a particular purpose. */
Technet Script Center - http://www.microsoft.com/technet/scriptcenter/
WSH 5.6 Documentation - http://msdn.microsoft.com/downloads/list/webdev.asp
MSDN Library - http://msdn.microsoft.com/library/default.asp


I added the record count to the legend, and now I know why the second one is
a lot faster: it has 1/10 the number of records.

