I want to get all article content from all links inside a website


Problem Description

I want to extract all article content from a website using any web crawling/scraping method.

The problem is that I can get the content of a single page, but not the pages its links lead to. Could anyone suggest a proper solution? Here is my code so far:

import java.io.FileOutputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.net.URI;
import java.net.URL;
import java.net.URLConnection;

import javax.swing.text.EditorKit;
import javax.swing.text.html.HTMLDocument;
import javax.swing.text.html.HTMLEditorKit;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Main3 {
    public static void main(String[] argv) throws Exception {
        HTMLDocument doc = new HTMLDocument() {
            public HTMLEditorKit.ParserCallback getReader(int pos) {
                return new HTMLEditorKit.ParserCallback() {
                    public void handleText(char[] data, int pos) {
                        System.out.println(data);
                    }
                };
            }
        };

        URL url = new URI("http://tamilblog.ishafoundation.org/").toURL();
        URLConnection conn = url.openConnection();
        Reader rd = new InputStreamReader(conn.getInputStream());
        OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream("ram.txt"), "UTF-8");

        EditorKit kit = new HTMLEditorKit();
        kit.read(rd, doc, 0);
        try {
            Document docs = Jsoup.connect("http://tamilblog.ishafoundation.org/").get();

            Elements links = docs.select("a[href]");
            Elements elements = docs.select("*");
            System.out.println("Total Links :" + links.size());

            for (Element element : elements) {
                System.out.println(element.ownText());
            }
            for (Element link : links) {
                System.out.println(" * a: link :" + link.attr("a:href"));
                System.out.println(" * a: text :" + link.text());
                System.out.println(" * a: text :" + link.text());
                System.out.println(" * a: Alt :" + link.attr("alt"));
                System.out.println(link.attr("p"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Recommended Answer

Here is the solution:

package com.github.davidepastore.stackoverflow34014436;

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.URLConnection;

import javax.swing.text.BadLocationException;
import javax.swing.text.EditorKit;
import javax.swing.text.html.HTMLDocument;
import javax.swing.text.html.HTMLEditorKit;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

/**
 * Stackoverflow 34014436 question.
 *
 */
public class App {
    public static void main(String[] args) throws URISyntaxException,
            IOException, BadLocationException {
        HTMLDocument doc = new HTMLDocument() {
            public HTMLEditorKit.ParserCallback getReader(int pos) {
                return new HTMLEditorKit.ParserCallback() {
                    public void handleText(char[] data, int pos) {
                        System.out.println(data);
                    }
                };
            }
        };

        URL url = new URI("http://tamilblog.ishafoundation.org/").toURL();
        URLConnection conn = url.openConnection();
        Reader rd = new InputStreamReader(conn.getInputStream());
        OutputStreamWriter writer = new OutputStreamWriter(
                new FileOutputStream("ram.txt"), "UTF-8");

        EditorKit kit = new HTMLEditorKit();
        kit.read(rd, doc, 0);
        try {
            Document docs = Jsoup.connect(
                    "http://tamilblog.ishafoundation.org/").get();

            Elements links = docs.select("a[href]");

            Elements elements = docs.select("*");
            System.out.println("Total Links :" + links.size());

            for (Element element : elements) {
                System.out.println(element.ownText());
            }
            for (Element link : links) {
                // Read the href attribute directly; the question's "a:href"
                // is not a real attribute name, so it always came back empty
                String hrefUrl = link.attr("href");
                // Skip empty hrefs and bare "#" anchors
                if (!"#".equals(hrefUrl) && !hrefUrl.isEmpty()) {
                    System.out.println(" * a: link :" + hrefUrl);
                    System.out.println(" * a: text :" + link.text());
                    writer.write(link.text() + " => " + hrefUrl + "\n");
                }
            }

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            writer.close();
        }
    }
}

Here we use the writer to write the text and URL of every link into the ram.txt file.
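
Note that this records the links themselves rather than the article content behind them, which is what the question ultimately asks for. To collect the articles, each on-site link can be fetched with a second Jsoup request and its body extracted. Below is a minimal sketch of that follow-up step; the div.entry-content selector and the articles.txt file name are illustrative assumptions (WordPress-style themes often use that container, but the selector must be checked against the blog's actual markup), not part of the original answer:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.util.HashSet;
import java.util.Set;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ArticleCrawler {
    public static void main(String[] args) throws IOException {
        String baseUrl = "http://tamilblog.ishafoundation.org/";
        Set<String> visited = new HashSet<String>();
        OutputStreamWriter writer = new OutputStreamWriter(
                new FileOutputStream("articles.txt"), "UTF-8");
        try {
            Document index = Jsoup.connect(baseUrl).get();
            for (Element link : index.select("a[href]")) {
                // "abs:href" resolves relative URLs against the page URL
                String href = link.attr("abs:href");
                // Stay on the same site and visit each URL only once
                if (!href.startsWith(baseUrl) || !visited.add(href)) {
                    continue;
                }
                try {
                    Document article = Jsoup.connect(href).get();
                    // Assumed WordPress-style container; adjust to the
                    // site's real article markup if it differs
                    String body = article.select("div.entry-content").text();
                    writer.write(article.title() + "\n" + body + "\n\n");
                } catch (IOException e) {
                    System.err.println("Failed to fetch " + href);
                }
            }
        } finally {
            writer.close();
        }
    }
}

Following links found on those pages in turn (for example, pagination links) would make this a simple breadth-first crawl; in that case it is worth adding a small delay between requests so as not to hammer the server.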

