Scraping a site


Problem Description

I am trying to write an alert system that periodically scrapes the Complaints Board site for any complaints about my product. I am using Jsoup for this. Below is the code fragment that gives me the error.

doc = Jsoup.connect(finalUrl).timeout(10 * 1000).get();

This gives me the error:

java.net.SocketException: Unexpected end of file from server

When I copy and paste the same finalUrl string into the browser, it works. I then tried a plain URLConnection:

            BufferedReader br = null;
            try {
                URL a = new URL(finalUrl);
                URLConnection conn = a.openConnection();

                // open the stream and put it into BufferedReader
                br = new BufferedReader(new InputStreamReader(
                        conn.getInputStream()));
                doc = Jsoup.parse(br.toString());
            } catch (IOException e) {
                e.printStackTrace();
            }

But as it turned out, the connection itself fails (br is null). Now the question is: why does the same string, when copied and pasted into the browser, open the site without any error?
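As an aside, note that br.toString() never reads the response body; it only prints the reader object, so even a successful connection would have produced an empty document. Below is a minimal sketch of the plain-URLConnection route that reads the stream line by line and sets an explicit User-Agent header (the class name, method name, and User-Agent value are illustrative assumptions, not part of the original code):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;
    import java.nio.charset.StandardCharsets;

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    public class UrlConnectionSketch {

        static Document fetch(String finalUrl) throws IOException {
            URLConnection conn = new URL(finalUrl).openConnection();
            // hypothetical User-Agent value; any real browser string should do (see the answer below)
            conn.setRequestProperty("User-Agent", "Mozilla/5.0");
            conn.setConnectTimeout(10 * 1000);
            conn.setReadTimeout(10 * 1000);

            // read the response body into a string
            StringBuilder html = new StringBuilder();
            BufferedReader br = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8));
            try {
                String line;
                while ((line = br.readLine()) != null) {
                    html.append(line).append('\n');
                }
            } finally {
                br.close();
            }

            // pass the page URL as base URI so relative links resolve
            return Jsoup.parse(html.toString(), finalUrl);
        }
    }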

The full stack trace is below:

java.net.SocketException: Unexpected end of file from server
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:774)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:771)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1195)
    at ComplaintsBoardScraper.main(ComplaintsBoardScraper.java:46)

Answer

That one was tricky! :-)

The server blocks all requests that don't have a proper user agent. That's why you succeeded with your browser but failed from Java.

Fortunately, changing the user agent is not a big deal in jsoup:

final String url = "http://www.complaintsboard.com/?search=justanswer.com&complaints=Complaints";
final String userAgent = "Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.7.3) Gecko/20040924 Epiphany/1.4.4 (Ubuntu)";

Document doc = Jsoup.connect(url) // you get a 'Connection' object here
                        .userAgent(userAgent) // ! set the user agent
                        .timeout(10 * 1000) // set timeout
                        .get(); // execute GET request

I've taken the first user agent I found … I guess you can use any valid one instead.
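Since the goal is an alert system that checks the site periodically, the working Jsoup call can be wrapped in a scheduled task. Here is a minimal sketch using ScheduledExecutorService; the class name, the one-hour interval, and the keyword check are illustrative assumptions, not something prescribed by the original answer:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    public class ComplaintsAlert {

        public static void main(String[] args) {
            final String url = "http://www.complaintsboard.com/?search=justanswer.com&complaints=Complaints";
            final String userAgent = "Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.7.3) Gecko/20040924 Epiphany/1.4.4 (Ubuntu)";

            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            // run once immediately, then once per hour (the interval is an arbitrary choice)
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    try {
                        Document doc = Jsoup.connect(url)
                                .userAgent(userAgent)
                                .timeout(10 * 1000)
                                .get();
                        // keyword check is a placeholder; adapt it to the product name and page structure
                        if (doc.text().contains("my product")) {
                            System.out.println("New complaint mentioning the product found!");
                        }
                    } catch (Exception e) {
                        e.printStackTrace(); // don't let a single failed fetch cancel the schedule
                    }
                }
            }, 0, 1, TimeUnit.HOURS);
        }
    }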

