Pagination with Web Driver Selenium and JSoup


Problem description

I'm developing an app that takes data from a website with JSoup, and I was able to get the normal data.

But now I need to implement pagination on it. I was told it would have to be done with Web Driver (Selenium), but I don't know how to work with it. Could someone tell me how I can do it?

import java.io.IOException;
import java.util.ArrayList;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

import android.os.AsyncTask;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;

public class MainActivity extends AppCompatActivity {

   private String url = "http://www.yudiz.com/blog/";
   private ArrayList<String> mAuthorNameList = new ArrayList<>();
   private ArrayList<String> mBlogUploadDateList = new ArrayList<>();
   private ArrayList<String> mBlogTitleList = new ArrayList<>();

   @Override
   protected void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);
       new Description().execute();

   }

   private class Description extends AsyncTask<Void, Void, Void> {

       @Override
       protected Void doInBackground(Void... params) {
           try {
               // Connect to the web site
               Document mBlogDocument = Jsoup.connect(url).get();
               // Using Elements to get the Meta data
               Elements mElementDataSize = mBlogDocument.select("div[class=author-date]");
               // Locate the content attribute
               int mElementSize = mElementDataSize.size();

               for (int i = 0; i < mElementSize; i++) {
                   Elements mElementAuthorName = mBlogDocument.select("span[class=vcard author post-author test]").select("a").eq(i);
                   String mAuthorName = mElementAuthorName.text();

                   Elements mElementBlogUploadDate = mBlogDocument.select("span[class=post-date updated]").eq(i);
                   String mBlogUploadDate = mElementBlogUploadDate.text();

                   Elements mElementBlogTitle = mBlogDocument.select("h2[class=entry-title]").select("a").eq(i);
                   String mBlogTitle = mElementBlogTitle.text();

                   mAuthorNameList.add(mAuthorName);
                   mBlogUploadDateList.add(mBlogUploadDate);
                   mBlogTitleList.add(mBlogTitle);
               }
           } catch (IOException e) {
               e.printStackTrace();
           }
           return null;
       }

       @Override
       protected void onPostExecute(Void result) {
           // Set description into TextView

           RecyclerView mRecyclerView = (RecyclerView)findViewById(R.id.act_recyclerview);

           DataAdapter mDataAdapter = new DataAdapter(MainActivity.this, mBlogTitleList, mAuthorNameList, mBlogUploadDateList);
           RecyclerView.LayoutManager mLayoutManager = new LinearLayoutManager(getApplicationContext());
           mRecyclerView.setLayoutManager(mLayoutManager);
           mRecyclerView.setAdapter(mDataAdapter);

       }
   }
}

Recommended answer

Problem statement (as per my understanding): the scraper should be able to go to the next page until all pages are done, using the pagination options available at the end of the blog page.

Now if we inspect the next button in the pagination, we can see the following HTML:

    <a class="next_page" href="http://www.yudiz.com/blog/page/2/">
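As a quick sanity check, Jsoup can locate this anchor and resolve its href directly. A minimal, self-contained sketch; the HTML string below is a stand-in for the blog's real pagination markup:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class NextPageDemo {
    public static void main(String[] args) {
        // Stand-in fragment mirroring the pagination markup shown above.
        String html = "<a class=\"next_page\" href=\"http://www.yudiz.com/blog/page/2/\">Next</a>";
        // Passing a base URI lets absUrl() resolve relative hrefs as well.
        Document doc = Jsoup.parse(html, "http://www.yudiz.com/blog/");
        Element next = doc.selectFirst("a.next_page");
        String nextUrl = (next != null) ? next.absUrl("href") : null;
        System.out.println(nextUrl); // http://www.yudiz.com/blog/page/2/
    }
}
```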

Now we need to instruct Jsoup to pick up this dynamic URL in the next iteration of the loop to scrape the data. This can be done using the following approach:

        String url = "http://www.yudiz.com/blog/";
        while (url != null) {
            try {
                Document doc = Jsoup.connect(url).get();
                url = null; // stays null (ending the loop) unless a "next" link is found below
                System.out.println(doc.getElementsByTag("title").text());
                //perform your data extractions here.
                for (Element next : doc.getElementsByClass("next_page")) {
                    url = next.absUrl("href"); // absolute URL of the next page
                }
            } catch (IOException e) {
                e.printStackTrace();
                url = null; // stop the loop instead of retrying the same URL forever
            }
        }
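Putting the two pieces together, the same pagination loop can drive the per-page extraction from the question's doInBackground(). A hedged sketch, not a definitive implementation: the selectors (h2.entry-title a, a.next_page) come from the question and answer above, and extractTitles is a helper introduced here for illustration:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class BlogScraper {
    // Per-page extraction, factored out so it can run on any parsed Document.
    // The selector matches the question's "h2[class=entry-title]" titles.
    static List<String> extractTitles(Document doc) {
        List<String> titles = new ArrayList<>();
        for (Element link : doc.select("h2.entry-title a")) {
            titles.add(link.text());
        }
        return titles;
    }

    public static void main(String[] args) {
        List<String> allTitles = new ArrayList<>();
        String url = "http://www.yudiz.com/blog/";
        while (url != null) {
            try {
                Document doc = Jsoup.connect(url).get();
                allTitles.addAll(extractTitles(doc));
                // Follow the "next" link; null ends the loop on the last page.
                Element next = doc.selectFirst("a.next_page");
                url = (next != null) ? next.absUrl("href") : null;
            } catch (IOException e) {
                e.printStackTrace();
                url = null; // stop rather than retry the same URL forever
            }
        }
        System.out.println(allTitles.size() + " titles collected");
    }
}
```

On Android, this whole loop would go inside doInBackground() so the network calls stay off the main thread.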
