Use HttpWebRequest to download web pages without key sensitive issues


Problem description


Use HttpWebRequest to download web pages without key sensitive issues

Answer

[update: I don't know why, but both examples below now work fine! Originally I was also seeing a 403 on the page2 example. Maybe it was a server issue?]

First, WebClient is easier. Actually, I've seen this before. It turned out to be case sensitivity in the URL when accessing Wikipedia; try ensuring that you have used the same case in your request to Wikipedia.

[updated] As Bruno Conde and gimel observe, using %27 should help make it consistent (the intermittent behaviour suggests that maybe some Wikipedia servers are configured differently to others)

I've just checked, and in this case the case issue doesn't seem to be the problem... however, if it worked (it doesn't), this would be the easiest way to request the page:

        using System.Net;    // WebClient lives in System.Net

        using (WebClient wc = new WebClient())
        {
            // Wikipedia URLs are case sensitive; the second one pre-encodes the apostrophe as %27
            string page1 = wc.DownloadString("http://en.wikipedia.org/wiki/Algeria");

            string page2 = wc.DownloadString("http://en.wikipedia.org/wiki/%27Abadilah");
        }
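
Since the question itself asks about HttpWebRequest rather than WebClient, here is a minimal sketch (my own addition, not from the original answer; the class and method names are made up) of doing the same request with HttpWebRequest:

        using System;
        using System.IO;
        using System.Net;

        class WikipediaFetch
        {
            // Hypothetical helper: issue a GET with HttpWebRequest and read the body as a string.
            static string Download(string url)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }

            static void Main()
            {
                // Same case-sensitive URLs as in the WebClient example above.
                string page1 = Download("http://en.wikipedia.org/wiki/Algeria");
                string page2 = Download("http://en.wikipedia.org/wiki/%27Abadilah");
                Console.WriteLine(page1.Length + " / " + page2.Length);
            }
        }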

I'm afraid I can't think of how to deal with the leading apostrophe that is causing the problem...
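
That said, if the %27 form mentioned in the update above does behave consistently, a minimal sketch (my own assumption, not part of the original answer) of pre-encoding the apostrophe before making the request might look like this:

        using System.Net;

        // Sketch only: replace the raw apostrophe with %27 before building the URL.
        string title = "'Abadilah";
        string url = "http://en.wikipedia.org/wiki/" + title.Replace("'", "%27");

        using (WebClient wc = new WebClient())
        {
            string page = wc.DownloadString(url);
        }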
