Read gzipped csv directly from a url in R
Question
I'm looking to download a gzipped csv and load it as an R object without saving it first to disk. I can do this with zipped files but can't seem to get it to work with gzfile or gzcon.
Example:
grabRemote <- function(url) {
  temp <- tempfile()
  download.file(url, temp)
  aap.file <- read.csv(gzfile(temp), as.is = TRUE)
  unlink(temp)
  return(aap.file)
}
grabRemote("http://dumps.wikimedia.org/other/articlefeedback/aa_combined-20110321.csv.gz")
That downloads a (small) gz compressed file containing Wikipedia article feedback data (not important, but just to indicate it isn't giant or nefarious).
The code I have works fine but I feel like I'm missing something very obvious by resorting to creating and destroying a temporary file.
Answer
I am almost certain I answered this question once before. The upshot is that R's connections API (file(), url(), pipe(), ...) can do decompression on the fly, but I do not think you can do it for remote http objects.
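For a local file, for instance, the connection classes decompress transparently. A minimal sketch (the filename here is hypothetical, just to illustrate the pattern):

```r
# gzfile() wraps a local gzipped file in a decompressing connection;
# read.csv() then reads it exactly as it would plain text.
dat <- read.csv(gzfile("aa_combined-20110321.csv.gz"), as.is = TRUE)
str(dat)
```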
So do the very two-step you have described: use download.file() with a tempfile() result as second argument to fetch the compressed file, and then read from it. As a tempfile() object, it will get cleaned up automatically at the end of your R session, so the one minor fix I can suggest is to skip the unlink() (but then I like explicit cleanups, so you may as well keep it).
Edit:
con <- gzcon(url(paste("http://dumps.wikimedia.org/other/articlefeedback/",
"aa_combined-20110321.csv.gz", sep="")))
txt <- readLines(con)
dat <- read.csv(textConnection(txt))
dim(dat)
# [1] 1490 19
summary(dat[,1:3])
# aa_page_id page_namespace page_title
# Min. : 324 Min. :0 United_States : 79
# 1st Qu.: 88568 1st Qu.:0 2011_NBA_Playoffs : 52
# Median : 2445733 Median :0 IPad_2 : 43
# Mean : 8279600 Mean :0 IPod_Touch : 38
# 3rd Qu.:16179920 3rd Qu.:0 True_Grit_(2010_film): 38
# Max. :31230028 Max. :0 IPhone_4 : 26
# (Other) :1214
The key was the hint in the gzcon help that it can put decompression around an existing stream. We then need the slight detour of readLines and reading via textConnection from that, as read.csv wants to go back and forth in the data (to validate column width, I presume).
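As a side note (not part of the original answer, and assuming a reasonably recent R version): read.csv() also accepts the lines directly via its text argument, which shortens the textConnection() detour:

```r
# gzcon() wraps the url() connection so the stream is decompressed on the fly
con <- gzcon(url(paste0("http://dumps.wikimedia.org/other/articlefeedback/",
                        "aa_combined-20110321.csv.gz")))
txt <- readLines(con)        # pull the decompressed lines into memory
dat <- read.csv(text = txt)  # text= replaces the explicit textConnection()
```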