How to extract contents between div tags with rvest and then bind rows

Question

I am trying to extract the data that appears between the div tags from this site:

http://bigbashboard.com/rankings/bbl/batsmen

They appear on the left hand side like this:

Batsmen

    1 Matthew Wade 125
    2 Marcus Stoinis 120
    3 D'Arcy Short 116

I also need the data that appears in the table to the right. I can get that by using the below code.

I have a csv file that cycles through the dates and then binds them together.

How can I extract the data between the div tags and then bind it together with the other data so that I have one data frame that looks like this:

  Rank  Name           Points  Dates                        I   R    HS   Ave    SR     4s  6s  100s 50s
   1    Matthew Wade   125     22 Dec 2018 - 30 Jan 2020    23  943  130  44.90 155.10  78  36  1     9
   2    Marcus Stoinis 120     21 Dec 2018 - 08 Feb 2020    30  1238 147  53.83 133.98  111 39  1     10
   3    D'Arcy Short   116     22 Dec 2018 - 30 Jan 2020    24  994  103  49.70 137.10  93  36  1     9

The above is just a snap shot of the first 3 records but I would need all records that appear on each page.

I would also like to add the date from the page address to the table as the first column, so when the page address is for example:

http://bigbashboard.com/rankings/bbl/batsmen/2018/01/24

I would like to add the date of 24/1/2018 to the table like so:

 Date      Rank  Name           Points  Dates                       I   R   HS  Ave     SR      4s  6s  100s    50s
 24/01/18     1   Chris Lynn    167     21 Dec 2016 - 05 Jan 2018   9   436 98  87.20   173.02  33  32   0     4 
 24/01/18     2   D'Arcy Short  166     23 Dec 2016 - 20 Jan 2018   17  702 122 43.88   152.28  70  31   1     5
 24/01/18     4   Alex Carey    102     18 Jan 2017 - 22 Jan 2018   10  400 100 57.14   138.89  39  12   1     2

My code:

library(rvest)

#load csv file with the dates
df <- read.csv('G:/dates.csv')

year <- df[[2]]
month <- df[[3]]
day <- df[[4]]

#add leading zeros to dates
month <- stringr::str_pad(month, 2, side="left", pad="0")
day <- stringr::str_pad(day, 2, side="left", pad="0")


site <- paste('http://bigbashboard.com/rankings/bbl/batsmen/', year, month, day, sep="/")

#get contents from first table that appears on the right of the page
dfList <- lapply(site, function(i) {
  webpage <- read_html(i)
  draft_table <- html_nodes(webpage, 'table')
  draft <- html_table(draft_table)[[1]]
})
    
#attempt to get contents from second table that appears on the left between div tags
dfList2 <- lapply(site, function(i) {
  webpage <- read_html(i)
  draft_table <- html_nodes(webpage, 'div.col w25')
  #draft <- html_table(draft_table)[[1]]
})

#attempt to bind both tables together
 finaldf <- do.call(rbind, dfList1, dfList2)  

Solution

Consider the following workflow:

library(rvest)
library(xml2)
library(dplyr)
library(furrr)

# Scrape the left-hand ranking list: each <a> in the ordered list holds the rank and
# points inside <span>s, with the player's name as the remaining text.
batsmen <- function(x) {
  x <- html_nodes(x, "div.cf.rankings-page div div ol li a")
  # Strip the nested <small>/<em> nodes so they don't pollute the extracted text.
  xml_remove(html_nodes(x, "span.rank small, span[class^='pos'] em"))
  score <- html_text(html_nodes(x, "span.rank"))
  rank <- html_text(html_nodes(x, "span[class^='pos']"), trim = TRUE)
  # Once every <span> is removed, the text left in each <a> is just the player's name.
  xml_remove(html_nodes(x, "span"))
  tibble(Rank = rank, Name = html_text(x), Points = score)
}

# The statistics table on the right is an ordinary <table>, so html_table() parses it directly.
stats_table <- function(x) {
  as_tibble(html_table(x)[[1L]])
}

# Read one rankings page: derive the date from the last three URL segments (yyyy/mm/dd),
# then column-bind the ranking list and the statistics table.
read_rankings <- function(url) {
  ymd <- as.Date(paste0(tail(strsplit(url, "/")[[1L]], 3L), collapse = "-"))
  read_html(url) %>% {bind_cols(Date = ymd, batsmen(.), stats_table(.))}
}

mas_url <- "http://bigbashboard.com/rankings/bbl/batsmen"

timeline <- 
  read_html(mas_url) %>% 
  html_nodes("div.timeline span a") %>% 
  html_attr("href") %>% 
  url_absolute(mas_url)

# Use parallel processing for speed.
plan(multiprocess)
future_map_dfr(timeline[1:100], read_rankings) # only scrape the first 100 links as a test
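
To sanity-check the scraper on a single page before the parallel run, you can call read_rankings on one URL, for example:

read_rankings(timeline[1])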

Output

# A tibble: 9,250 x 14
   Date       Rank  Name           Points Dates                         I     R    HS   Ave    SR  `4s`  `6s` `100s` `50s`
   <date>     <chr> <chr>          <chr>  <chr>                     <int> <int> <int> <dbl> <dbl> <int> <int>  <int> <int>
 1 2020-02-08 1     Matthew Wade   125    22 Dec 2018 - 30 Jan 2020    23   943   130  44.9  155.    78    36      1     9
 2 2020-02-08 2     Marcus Stoinis 120    21 Dec 2018 - 08 Feb 2020    30  1238   147  53.8  134.   111    39      1    10
 3 2020-02-08 3     D'Arcy Short   116    22 Dec 2018 - 30 Jan 2020    24   994   103  49.7  137.    93    36      1     9
 4 2020-02-08 4     Alex Hales     115    17 Dec 2019 - 06 Feb 2020    17   576    85  38.4  147.    59    23      0     6
 5 2020-02-08 5     Aaron Finch    89     07 Jan 2019 - 27 Jan 2020    17   583   109  36.4  130.    41    24      1     4
 6 2020-02-08 6     Josh Inglis    87     26 Dec 2018 - 26 Jan 2020    18   517    73  28.7  149.    53    19      0     5
 7 2020-02-08 7     Travis Head    87     11 Jan 2019 - 01 Feb 2020    10   291    79  29.1  132.    22    13      0     1
 8 2020-02-08 8     Josh Philippe  84     22 Dec 2018 - 08 Feb 2020    31   791    86  34.4  140.    76    23      0     7
 9 2020-02-08 9     Shaun Marsh    82     24 Jan 2019 - 21 Jan 2020    15   547    96  39.1  128.    45    19      0     4
10 2020-02-08 10    Chris Lynn     78     19 Dec 2018 - 27 Jan 2020    27   772    94  32.2  137.    64    44      0     6
# ... with 9,240 more rows
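
Note that Rank and Points come back as character columns (<chr> above). If you want them numeric, a small cleanup step works; this is only a sketch and assumes the scraped values are plain integers with no extra symbols:

rankings <- future_map_dfr(timeline[1:100], read_rankings) %>%
  mutate(Rank = as.integer(Rank), Points = as.integer(Points))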

The timeline variable looks like this:

> head(timeline)
[1] "http://bigbashboard.com/rankings/bbl/batsmen/2020/02/08" "http://bigbashboard.com/rankings/bbl/batsmen/2020/02/06"
[3] "http://bigbashboard.com/rankings/bbl/batsmen/2020/02/01" "http://bigbashboard.com/rankings/bbl/batsmen/2020/01/31"
[5] "http://bigbashboard.com/rankings/bbl/batsmen/2020/01/30" "http://bigbashboard.com/rankings/bbl/batsmen/2020/01/27"

It contains every ranking date available on that website, so you don't need a separate csv file to store the year, month and day. You can also restrict the scrape to the dates you want, as I did above with timeline[1:100]; a sketch of date-based filtering follows.
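
If you only need particular dates rather than the first 100 links, you can filter timeline by the date embedded in each URL before mapping over it. A minimal sketch, assuming every timeline URL ends in the /yyyy/mm/dd suffix shown in head(timeline) above:

# Parse the trailing yyyy/mm/dd out of each URL.
timeline_dates <- as.Date(sub(".*/batsmen/", "", timeline), format = "%Y/%m/%d")

# Keep, say, only the January 2018 pages, then scrape those.
wanted <- timeline[!is.na(timeline_dates) &
                   timeline_dates >= as.Date("2018-01-01") &
                   timeline_dates <= as.Date("2018-01-31")]
future_map_dfr(wanted, read_rankings)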
