DownloadString returns a 404 Error: Site needs a User-Agent Header


Question

I have a C# program which worked fine until a day or two ago. I use the following snippet to grab a page:

string strSiteListPath = @"http://www.ngs.noaa.gov/CORS/dates_sites.txt";
Uri uriSiteListPath = new Uri(strSiteListPath);
System.Net.WebClient oWebClient = new System.Net.WebClient();
string strStationList = oWebClient.DownloadString(uriSiteListPath);

But it consistently returns a 404 Not Found error. The page definitely exists; you are welcome to try it yourself. Because it worked days ago, and nothing in my code has changed, I suspect the web server changed in some way. That's fine, it happens, but what exactly has happened here?

Why can I browse to the file manually, but DownloadString fails to get the file?

Edit:

For completeness, the code now looks like this:

string strSiteListPath = @"http://www.ngs.noaa.gov/CORS/dates_sites.txt";
Uri uriSiteListPath = new Uri(strSiteListPath);

System.Net.WebClient oWebClient = new System.Net.WebClient();
// The site rejects requests that lack a User-Agent header, so supply one.
oWebClient.Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0");

string strStationList = oWebClient.DownloadString(uriSiteListPath);

Thanks again, Thomas Levesque!

Answer

Apparently the site requires that you send a valid User-Agent header. If you set that header to something like this:

Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0

then the request works fine.
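As a side note, `WebClient` is considered legacy in modern .NET, and the same fix applies to its replacement. Below is a minimal sketch using `HttpClient` that sets a default User-Agent header before downloading the file; the UA string is the same example one from above, and any realistic browser UA should work:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Without this header the server answers 404; with it, the request succeeds.
        client.DefaultRequestHeaders.UserAgent.ParseAdd(
            "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0");

        string stationList = await client.GetStringAsync(
            "http://www.ngs.noaa.gov/CORS/dates_sites.txt");

        Console.WriteLine(stationList);
    }
}
```

`DefaultRequestHeaders` applies the header to every request the client makes, which is convenient when you fetch several pages from the same server.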
