scraping with R using rvest and purrr, multiple pages
Problem description
I am trying to scrape a database containing information about previously sold houses in an area of Denmark. I want to retrieve information not only from page 1, but also from pages 2, 3, 4, etc.
I am new to R, but from a tutorial I ended up with this:
library(purrr)
library(rvest)
urlbase <- "https://www.boliga.dk/solgt/alle_boliger-4000ipostnr=4000&so=1&p=%d"
map_df(1:5,function(i){
cat(".")
page <- read_html(sprintf(urlbase,i))
data.frame(Address = html_text(html_nodes(page,".d-md-table-cell a")))
Price = html_text(html_nodes(page,".text-md-left+ .d-md-table-cell .text-right"))
Rooms = html_text(html_nodes(page,".d-md-table-cell:nth-child(5) .paddingR"))
m2 = html_text(html_nodes(page,".qtipped+ .d-md-table-cell .paddingR"))
stringsAsFactors = FALSE
}) -> BOLIGA.ROSKILDE
View(BOLIGA.ROSKILDE)
This gives me the error:
Error in bind_rows_(x, .id) : Argument 1 must have names
Any help is welcome.
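The error comes from a misplaced closing parenthesis: the data.frame(...) call in the question closes right after the Address column, so the Price, Rooms, and m2 lines become stray assignments inside the function, and its last expression, stringsAsFactors = FALSE, invisibly returns the unnamed logical FALSE, which map_df cannot row-bind. A minimal correction, keeping the question's URL and CSS selectors as-is (note that data.frame will still error if the selectors return vectors of different lengths on a page):

```r
library(purrr)
library(rvest)

urlbase <- "https://www.boliga.dk/solgt/alle_boliger-4000ipostnr=4000&so=1&p=%d"

map_df(1:5, function(i) {
  cat(".")
  page <- read_html(sprintf(urlbase, i))
  # all columns must sit inside the single data.frame() call
  data.frame(
    Address = html_text(html_nodes(page, ".d-md-table-cell a")),
    Price   = html_text(html_nodes(page, ".text-md-left+ .d-md-table-cell .text-right")),
    Rooms   = html_text(html_nodes(page, ".d-md-table-cell:nth-child(5) .paddingR")),
    m2      = html_text(html_nodes(page, ".qtipped+ .d-md-table-cell .paddingR")),
    stringsAsFactors = FALSE
  )
}) -> BOLIGA.ROSKILDE
```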
Answer
Try this:
library(rvest)
library(tidyverse)

url <- "https://www.boliga.dk/solgt/alle_boliger-4000ipostnr=4000?ipostnr=4000ipostnr&so=1&p=1"

# find the number of pages in the table (40 results per page)
pgs <- ceiling(read_html(url) %>%
  html_nodes(".d-print-none") %>%
  html_nodes("b") %>%
  html_text() %>%
  gsub("[^\\d]+", "", ., perl = TRUE) %>%
  as.numeric() / 40)

# scrape one page of the results table
scrap <- function(pg) {
  url <- paste0("https://www.boliga.dk/solgt/alle_boliger-4000ipostnr=4000?ipostnr=4000ipostnr&so=1&p=", pg)
  read_html(url) %>%
    html_node(".searchResultTable") %>%
    html_table() %>%
    .[, c(1, 2, 5, 4)] %>%
    magrittr::set_colnames(c("Address", "Price", "Rooms", "m2")) %>%
    mutate(m2 = as.numeric(m2))
}

# map over each page number and row-bind the results
df <- seq(1, pgs) %>%
  map_df(scrap)
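When scraping many pages, a single failed request would abort the whole map_df run. A sketch of one way to guard against that, wrapping the scrap() function defined above with purrr's possibly() (pages that error yield NULL, which map_df silently skips):

```r
library(purrr)

# if a page fails to load or parse, return NULL instead of erroring
safe_scrap <- possibly(scrap, otherwise = NULL)

df <- seq(1, pgs) %>%
  map_df(safe_scrap)
```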