Extracting links of a Facebook page


Problem description

How can I extract all the links of a Facebook page? Can I do it with jsoup, passing the "like" link as a parameter, to extract the info of every user who liked that particular page?

private static String readAll(Reader rd) throws IOException
{
    // Read the whole stream into a single string.
    StringBuilder sb = new StringBuilder();
    int cp;
    while ((cp = rd.read()) != -1)
    {
        sb.append((char) cp);
    }
    return sb.toString();
}

public static JSONObject readurl(String url) throws IOException, JSONException
{
    // Fetch the URL and parse the response body as a JSON object.
    InputStream is = new URL(url).openStream();
    try
    {
        BufferedReader rd = new BufferedReader(
                new InputStreamReader(is, Charset.forName("UTF-8")));
        String jsonText = readAll(rd);
        return new JSONObject(jsonText);
    }
    finally
    {
        is.close();
    }
}

public static void main(String[] args) throws IOException,
        JSONException, FacebookException
{
    System.out.println("\nEnter the search string:");
    @SuppressWarnings("resource")
    Scanner sc = new Scanner(System.in);
    String s = sc.nextLine();

    // Query the Graph API for the entered id/name and print the raw JSON response.
    JSONObject json = readurl("https://graph.facebook.com/" + s);
    System.out.println(json);
}
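For reference, a minimal sketch of how the returned JSONObject could be inspected inside main, assuming the Graph API response happens to contain fields such as "name", "link" and "likes" (which fields actually come back depends on the object type, the API version and the access token used):

// Hypothetical follow-up inside main: pull a few fields out of the Graph API response.
// The field names below are assumptions; adjust them to whatever the API actually returns.
String name = json.optString("name");      // page/user name, if present
String link = json.optString("link");      // public facebook.com URL, if present
int likes   = json.optInt("likes", -1);    // like count, or -1 if the field is absent
System.out.println(name + " -> " + link + " (likes: " + likes + ")");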

Can I modify the above and integrate it with the code below? The code below extracts all the links of a particular page. I tried adding it to the above code, but it is not working.

 String url = "http://www.firstpost.com/tag/crime-in-india";
  Document doc = Jsoup.connect(url).get();
  Elements links = doc.getElementsByTag("a");
   System.out.println(links.size());

    for (Element link : links) 
    {
        System.out.println(link.absUrl("href") +trim(link.text(), 35));     
    }
  }

  public static String trim(String s, int width) {
    if (s.length() > width)
        return s.substring(0, width-1) + ".";
    else
        return s;
  }
 }
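One possible way to combine the two pieces, sketched under the assumption that the Graph API response exposes a "link" field holding the page's public URL (whether it does depends on the object type, API version and permissions), would be to feed that URL into the jsoup extractor:

// Rough integration sketch (hypothetical), reusing readurl() and trim() from above.
JSONObject json = readurl("https://graph.facebook.com/" + s);
String pageUrl = json.optString("link", "https://www.facebook.com/" + s); // fall back to a guess

// Note: jsoup only sees the static HTML; most of facebook.com is rendered by
// JavaScript behind a login wall, so this may return very few links.
Document doc = Jsoup.connect(pageUrl).get();
for (Element link : doc.getElementsByTag("a"))
{
    System.out.println(link.absUrl("href") + " " + trim(link.text(), 35));
}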


Answer

It kind of works, but I'm not sure you could use jsoup for this; I would rather look into casperjs or phantomjs.

import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class getFaceBookLinks {

    // Loads the page at httplink, grabs all elements with the given tag,
    // then filters them with the given CSS selector.
    public static Elements getElementsByTag_then_FilterBySelector (String tag, String httplink, String selector){
        Document doc = null;
        try {
            doc = Jsoup.connect(httplink).get();
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (doc == null) {
            // The connection failed; return an empty result instead of throwing a NullPointerException.
            return new Elements();
        }
        Elements links = doc.getElementsByTag(tag);
        return links.select(selector);
    }

    //Test functionality
    public static void main(String[] args){
        // The class name for the like links on facebook is UFILikeLink
        Elements likeLinks = getElementsByTag_then_FilterBySelector("a", "http://www.facebook.com", ".UFILikeLink");        
        System.out.println(likeLinks);

    }

}
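If the selector does match anything, the returned Elements can be walked like any other jsoup result, for example:

for (Element like : likeLinks)
{
    System.out.println(like.absUrl("href"));
}

In practice, the .UFILikeLink class and the markup around it are controlled by Facebook and change over time, and most of facebook.com is rendered by JavaScript behind a login wall that a static parser like jsoup never executes; that is why the answer points to casperjs or phantomjs, which drive a real browser engine.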

