Loop URL and store info in R


Problem Description

I'm trying to write a for loop that will loop through many websites, extract a few elements, and store the results in a table in R. Here's my attempt so far; I'm just not sure how to start the for loop, or how to copy all the results into one variable so it can be exported later.

library("dplyr")
library("rvest")
library("leaflet")
library("ggmap")


url <- read_html("http://www.webiste_name.com/")

agent <- html_nodes(url, "h1 span")
fnames <- html_nodes(url, "#offNumber_mainLocContent span")
address <- html_nodes(url, "#locStreetContent_mainLocContent")

scrape <- t(c(html_text(agent), html_text(fnames), html_text(address)))


View(scrape)

Recommended Answer

I would go with lapply.

The code will look like this:

library("rvest")
library("dplyr")

#a vector of urls you want to scrape
URLs <- c("http://...1", "http://...2", ....)

df <- lapply(URLs, function(u){

      html.obj <- read_html(u)
      agent <- html_nodes(html.obj, "h1 span") %>% html_text
      fnames <- html_nodes(html.obj, "#offNumber_mainLocContent span") %>% html_text
      address <- html_nodes(html.obj, "#locStreetContent_mainLocContent") %>% html_text

      data.frame(Agent = agent, Fnames = fnames, Address = address)
})

df <- do.call(rbind, df)  #do.call, not do.all: binds the list of per-URL data frames into one

View(df)
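
If you'd rather stick with an explicit for loop, as the question asks, an equivalent sketch looks like this: preallocate a list, fill in one data frame per URL, then bind them together. The URLs are the same placeholders as above, and the CSV file name in the export step is just an example.

#a vector of urls you want to scrape (placeholders, as above)
URLs <- c("http://...1", "http://...2")

#preallocate a list with one slot per URL
results <- vector("list", length(URLs))

for (i in seq_along(URLs)) {
      html.obj <- read_html(URLs[i])
      agent <- html_nodes(html.obj, "h1 span") %>% html_text
      fnames <- html_nodes(html.obj, "#offNumber_mainLocContent span") %>% html_text
      address <- html_nodes(html.obj, "#locStreetContent_mainLocContent") %>% html_text

      results[[i]] <- data.frame(Agent = agent, Fnames = fnames, Address = address)
}

#bind the per-URL data frames and export, e.g. as a CSV (file name is an example)
df <- do.call(rbind, results)
write.csv(df, "scraped_results.csv", row.names = FALSE)

Note that if a selector matches nothing on some page, html_text returns a zero-length vector and the data.frame() call will fail with a "differing number of rows" error; wrapping the loop body in tryCatch() is one way to skip such pages.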
