How to read the content of a website?


Question

I want to read the content of a website and store it in a file using C# and ASP.NET. I know we can read it using HttpWebRequest. But is it also possible to read the content behind all the available links?

For example, suppose I want to read http://www.msn.com. I can give the URL directly and read the home page data without any problem. But the msn.com home page contains many links, and I want to read the content of those pages as well. Is that possible?
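For reference, the single-page case that already works can be sketched as below, using HttpWebRequest as the question mentions. The method names `FetchPage` and `SaveToFile` are illustrative, not from any framework:

```csharp
using System.IO;
using System.Net;

public static class PageReader
{
    // Download the raw HTML of one page with HttpWebRequest,
    // the class mentioned in the question.
    public static string FetchPage(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }

    // Store the downloaded content in a file.
    public static void SaveToFile(string content, string path)
    {
        File.WriteAllText(path, content);
    }
}
```

Usage would be `PageReader.SaveToFile(PageReader.FetchPage("http://www.msn.com"), "msn.html");`. On newer .NET versions `HttpClient` is the preferred API, but `HttpWebRequest` matches what the question already uses.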

Can somebody give me a starting point for this?

Thanks in advance

Solution

  1. Define a queue of URLs

  2. Add the main page URL to the queue

  3. While the queue is not empty:

3.1 currentUrl = Dequeue()

3.2 Read the page at the current URL

3.3 Extract all URLs from the current page using a regular expression

3.4 Add the extracted URLs to the queue

You will have to limit the URLs in the queue to some depth or to some domain, otherwise you will try to download the entire Internet :)
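The steps above can be sketched in C# as a small breadth-first crawler. This is a rough illustration, not production code: `Crawl` and `ExtractLinks` are names chosen here, `WebClient` stands in for HttpWebRequest for brevity, the page cap plays the role of the depth/domain limit, and the regex only catches quoted `href` attributes (a real crawler would use an HTML parser and resolve relative URLs):

```csharp
using System.Collections.Generic;
using System.Net;
using System.Text.RegularExpressions;

public static class SimpleCrawler
{
    // Step 3.3: pull candidate URLs out of a page with a regular expression.
    // Only matches quoted href="..." / href='...' attributes.
    public static List<string> ExtractLinks(string html)
    {
        var links = new List<string>();
        foreach (Match m in Regex.Matches(html, "href\\s*=\\s*[\"']([^\"']+)[\"']", RegexOptions.IgnoreCase))
            links.Add(m.Groups[1].Value);
        return links;
    }

    // Steps 1-3.4: queue-driven breadth-first crawl.
    // maxPages caps the crawl so it does not try to download the entire Internet.
    public static Dictionary<string, string> Crawl(string startUrl, int maxPages)
    {
        var pages = new Dictionary<string, string>();   // url -> page content
        var queue = new Queue<string>();                // step 1: define a queue of URLs
        var seen = new HashSet<string> { startUrl };    // avoid revisiting the same URL
        queue.Enqueue(startUrl);                        // step 2: add the main page URL

        using (var client = new WebClient())
        {
            while (queue.Count > 0 && pages.Count < maxPages) // step 3 (with a page cap)
            {
                string currentUrl = queue.Dequeue();              // step 3.1
                string html;
                try { html = client.DownloadString(currentUrl); } // step 3.2
                catch (WebException) { continue; }                // skip unreachable pages
                pages[currentUrl] = html;
                foreach (string link in ExtractLinks(html))       // step 3.3
                    if (link.StartsWith("http") && seen.Add(link))
                        queue.Enqueue(link);                      // step 3.4
            }
        }
        return pages;
    }
}
```

The `seen` set is what keeps the crawl from looping forever on pages that link back to each other; restricting enqueued links to a single host would be the simplest way to add the domain limit mentioned above.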


