Efficient alternatives to merge for larger data.frames R
Question
I am looking for an efficient method (both in terms of computer resources and of learning/implementation effort) to merge two larger data frames (size > 1 million / 300 KB RData file).
"merge" in base R and "join" in plyr appear to use up all my memory, effectively crashing my system.
Example
Load the test data frame
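The original RData file is not included with the question, so as a stand-in, a small data frame of the same shape assumed by the answer below (11 columns, all read in as factors) might be built like this; the row count and values are invented for illustration:

```r
## Hypothetical stand-in for the missing test data: 11 columns,
## all stored as factors (the shape the accepted answer assumes).
set.seed(1)
n <- 1000  # the real data has > 1 million rows
test <- data.frame(
  matrix(sample(1:100, n * 10, replace = TRUE), nrow = n),
  X11 = sample(c("TRUE", "FALSE"), n, replace = TRUE)
)
test[] <- lapply(test, as.factor)  # mimic everything loading as factors
```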
and try
test.merged<-merge(test, test)
or
test.merged<-join(test, test, type="all")
The following post provides a list of merge and alternatives:
How to join (merge) data frames (inner, outer, left, right)?
The following allows object size inspection:
https://heuristically.wordpress.com/2010/01/04/r-memory-usage-statistics-variable/
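As a quick illustration of the kind of inspection that link covers, base R's `object.size()` can report per-object memory use without any packages; the object names here are placeholders, not from the question:

```r
## Report the memory footprint of individual objects (base R only)
df1 <- data.frame(a = runif(1e5), b = sample(letters, 1e5, replace = TRUE))
df2 <- data.frame(a = runif(1e5), y = rnorm(1e5))

print(object.size(df1), units = "MB")  # human-readable size of one object

## Compare the inputs before attempting a large merge, biggest first
sizes <- sapply(list(df1 = df1, df2 = df2), object.size)
sort(sizes, decreasing = TRUE)
```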
Answer
Here's the obligatory data.table example:
library(data.table)
## Fix up your example data.frame so that the columns aren't all factors
## (not necessary, but shows that data.table can now use numeric columns as keys)
cols <- c(1:5, 7:10)
test[cols] <- lapply(cols, FUN=function(X) as.numeric(as.character(test[[X]])))
test[11] <- as.logical(test[[11]])
## Create two data.tables with which to demonstrate a data.table merge
dt <- data.table(test, key=names(test))
dt2 <- copy(dt)
## Add to each one a unique non-keyed column
dt$X <- seq_len(nrow(dt))
dt2$Y <- rev(seq_len(nrow(dt)))
## Merge them based on the keyed columns (in both cases, all but the last) to ...
## (1) create a new data.table
dt3 <- dt[dt2]
## (2) or (poss. minimizing memory usage), just add column Y from dt2 to dt
dt[dt2,Y:=Y]
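To make the two join styles above concrete, here is a minimal, self-contained run on toy tables (the column names `id`, `X`, and `Y` are invented for this sketch, not taken from the question's data):

```r
library(data.table)

## Toy keyed tables: shared key column 'id', one extra column each
a <- data.table(id = 1:3, X = c("a", "b", "c"), key = "id")
b <- data.table(id = 1:3, Y = c(10, 20, 30), key = "id")

## (1) Keyed join producing a new data.table with columns id, X, Y
ab <- a[b]

## (2) Join-and-assign by reference: adds Y to 'a' without copying it,
##     which is the memory-saving variant shown in the answer
a[b, Y := Y]
```

The second form is what keeps memory use down on large data: `:=` modifies `a` in place during the join, so no third merged copy is materialized.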