Scraping several webpages from a website (newspaper archive) using RSelenium


Problem description

Based on the explanations, I managed to scrape one page from the newspaper archive here.

Now I am trying to automate the process of accessing a list of pages by running one piece of code. Making the list of URLs was easy, since the newspaper's archive has a similar pattern of links:

https://en.trend.az/archive/2021-XX-XX

The problem is in writing a loop to scrape data such as headlines, dates, times, and categories. For simplicity, I tried to work only with article headlines between 2021-09-30 and 2021-10-02.

## Setting data frames

d1 <- as.Date("2021-09-30")
d2 <- as.Date("2021-10-02")

list_of_url <- character()   # or str_c()

## Generating subpage list 
 
for (i in format(seq(d1, d2, by="days"), format="%Y-%m-%d"))  {
  list_of_url[i] <- str_c ("https://en.trend.az", "/archive/", i)

# Launching browser

driver <- rsDriver(browser = c("firefox"))  #Version 93.0 (64-bit)
remDr <- driver[["client"]]
remDr$errorDetails
remDr$navigate(list_of_url[i])
   
   remDr0$findElement(using = "xpath", value = '/html/body/div[1]/div/div[1]/h1')$clickElement()
   
   webElem <- remDr$findElement("css", "body")
#scrolling to the end of webpage, to load all articles 
for (i in 1:25){
  Sys.sleep(2)
  webElem$sendKeysToElement(list(key = "end"))
} 

page <- read_html(remDr$getPageSource()[[1]])

# Scraping article headlines

get_headline <- page %>%
html_nodes('.category-article') %>% html_nodes('.article-title') %>% 
  html_text()
get_time <- str_sub(get_time, start= -5)

length(get_time)
   }
}

The total length should be 157+166+140=463. In fact, I could not even collect all the data from one page (`length(get_time)` = 126).

I think that after the first set of commands in the loop I get three remDr objects for the three specified dates, but they are not recognized independently later on.

Therefore, I tried launching a second loop inside the first one, before or after `page <-`:
  for (remDr0 in remDr) {
page <- read_html(remDr0$getPageSource()[[1]])
# substituted all remDr-s below with remDr0

page <- read_html(remDr$getPageSource()[[1]])
for (page0 in page)
# substituted all page-s below with page0

However, these attempts ended with different errors.

I would very much appreciate help from experts, as this is my first time using R for this kind of purpose.

Hopefully it is possible to correct the existing loop I created, or even to suggest a shorter path, e.g. by creating a function.

Recommended answer

Slightly extended, so that several categories can be scraped.

    library(RSelenium)
    library(dplyr)
    library(rvest)
    library(stringr)  # needed for str_sub() in get_time

Specify the date period

    d1 <- as.Date("2021-09-30")
    d2 <- as.Date("2021-10-02")
    dt = seq(d1, d2, by = "days")  # contains the date sequence
    
    #launch browser 
    driver <- rsDriver(browser = c("firefox"))  
    remDr <- driver[["client"]]
    
### `get_headline`  Function for newspaper headlines 

    get_headline = function(x){
      link = paste0( 'https://en.trend.az/archive/', x)
      remDr$navigate(link)
      remDr$findElement(using = "xpath", value = '/html/body/div[1]/div/div[1]/h1')$clickElement()
      webElem <- remDr$findElement("css", "body")
      #scrolling to the end of webpage, to load all articles 
      for (i in 1:25){
        Sys.sleep(1)
        webElem$sendKeysToElement(list(key = "end"))
      } 
      
      headlines = remDr$getPageSource()[[1]] %>% 
        read_html() %>%
        html_nodes('.category-article') %>% html_nodes('.article-title') %>% 
        html_text()
      return(headlines)
    }

`get_time` function for publication times

get_time <- function(x){
  link = paste0( 'https://en.trend.az/archive/', x)
  remDr$navigate(link)
  remDr$findElement(using = "xpath", value = '/html/body/div[1]/div/div[1]/h1')$clickElement()
  webElem <- remDr$findElement("css", "body")
  #scrolling to the end of webpage, to load all articles 
  for (i in 1:25){
    Sys.sleep(1)
    webElem$sendKeysToElement(list(key = "end"))
  } 
  
  # Addressing selector of time on the website
  
  time <- remDr$getPageSource()[[1]] %>%
    read_html() %>%
    html_nodes('.category-article') %>% html_nodes('.article-date') %>% 
    html_text() %>%
    str_sub(start= -5)
  return(time)
}
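The rvest part of these functions can be checked offline, without a browser. A minimal sketch with stand-in HTML that mimics the archive page's structure (the class names `.category-article`, `.article-title`, `.article-date` are taken from the answer's selectors; the sample headlines and timestamps are invented for illustration):

```r
library(rvest)
library(stringr)

# Stand-in HTML mimicking the archive page structure (sample content is invented)
page <- minimal_html('
  <div class="category-article">
    <div class="article-title">Headline one</div>
    <div class="article-date">30 September 2021 10:15</div>
  </div>
  <div class="category-article">
    <div class="article-title">Headline two</div>
    <div class="article-date">30 September 2021 11:40</div>
  </div>')

titles <- page %>%
  html_nodes('.category-article') %>% html_nodes('.article-title') %>%
  html_text()

# The last 5 characters of the date string are the HH:MM time
times <- page %>%
  html_nodes('.category-article') %>% html_nodes('.article-date') %>%
  html_text() %>%
  str_sub(start = -5)

titles  # c("Headline one", "Headline two")
times   # c("10:15", "11:40")
```

This makes it easy to verify the selector chain before spending time on the RSelenium navigation and scrolling.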

Numbering all articles of one page/day

get_number <- function(x){
  link = paste0( 'https://en.trend.az/archive/', x)
  remDr$navigate(link)
  remDr$findElement(using = "xpath", value = '/html/body/div[1]/div/div[1]/h1')$clickElement()
  webElem <- remDr$findElement("css", "body")
  #scrolling to the end of webpage, to load all articles 
  for (i in 1:25){
    Sys.sleep(1)
    webElem$sendKeysToElement(list(key = "end"))
  } 
  
  # Addressing selectors of headlines on the website
  
  headline <- remDr$getPageSource()[[1]] %>% 
    read_html() %>%
    html_nodes('.category-article') %>% html_nodes('.article-title') %>% 
    html_text()
  number <- seq_along(headline)  # 1..n; safe even when no articles are found
  return(number)
}

Collecting all functions into a tibble

get_data_table <- function(x){

      # Extract the Basic information from the HTML
      headline <- get_headline(x)
      time <- get_time(x)
      headline_number <- get_number(x)

      # Combine into a tibble
      combined_data <- tibble(Num = headline_number,
                              Article = headline,
                              Time = time)
      return(combined_data)
}

Loop through all dates in `dt` using lapply

    df = lapply(dt, get_data_table)
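`lapply` returns a list with one tibble per day. A possible follow-up step is to stack them into a single table tagged with each day's date via `dplyr::bind_rows`; a sketch with stand-in tibbles (the real ones come from `get_data_table`):

```r
library(dplyr)
library(tibble)

# Stand-in results; in practice this list comes from lapply(dt, get_data_table)
df <- list(
  tibble(Num = 1:2, Article = c("A", "B"), Time = c("10:00", "11:00")),
  tibble(Num = 1L,  Article = "C",         Time = "09:30")
)
names(df) <- c("2021-09-30", "2021-10-01")

# .id turns the list names into a Date column, one row per article
combined <- bind_rows(df, .id = "Date")
nrow(combined)  # 3
```

Naming the list by date before binding keeps track of which archive page each headline came from.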

