Fastest way to replace NAs in a large data.table
Question
I have a large data.table, with many missing values scattered throughout its ~200k rows and 200 columns. I would like to recode those NA values to zeros as efficiently as possible.
I see two options:
1: Convert to a data.frame, and use something like this
2: Some kind of cool data.table subsetting command
I'll be happy with a fairly efficient solution of type 1. Converting to a data.frame and then back to a data.table won't take too long.
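The benchmarks in the answer below rely on a create_dt helper defined in the answers linked from this question. As a rough, hypothetical sketch of what such a generator might look like (the name matches the answer's usage, but the body and argument defaults here are assumptions):

```r
library(data.table)

# Hypothetical sketch of the create_dt helper used in the benchmarks:
# an nrow x ncol data.table of uniform random numbers, with a given
# proportion of entries set to NA.
create_dt <- function(nrow = 5, ncol = 5, propNA = 0.5) {
  v <- runif(nrow * ncol)
  # blank out exactly propNA of the entries
  v[sample(nrow * ncol, propNA * nrow * ncol)] <- NA
  data.table(matrix(v, ncol = ncol))
}

dt1 <- create_dt(1e3, 10, 0.1)
dim(dt1)          # 1000 rows, 10 columns
mean(is.na(dt1))  # exactly 0.1 by construction
```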
Answer
Here's a solution using data.table's := operator, building on Andrie and Ramnath's answers.
require(data.table) # v1.6.6
require(gdata) # v2.8.2
set.seed(1)
dt1 = create_dt(2e5, 200, 0.1)
dim(dt1)
[1] 200000 200 # more columns than Ramnath's answer which had 5 not 200
f_andrie = function(dt) remove_na(dt)
f_gdata = function(dt, un = 0) gdata::NAToUnknown(dt, un)
f_dowle = function(dt) {     # see EDIT later for more elegant solution
  na.replace = function(v, value=0) { v[is.na(v)] = value; v }
  for (i in names(dt))
    eval(parse(text=paste("dt[,",i,":=na.replace(",i,")]")))
}
system.time(a_gdata <- f_gdata(dt1))
user system elapsed
18.805 12.301 134.985
system.time(a_andrie <- f_andrie(dt1))
Error: cannot allocate vector of size 305.2 Mb
Timing stopped at: 14.541 7.764 68.285
system.time(f_dowle(dt1))
user system elapsed
7.452 4.144 19.590 # EDIT has faster than this
identical(a_gdata, dt1)
[1] TRUE
Note that f_dowle updated dt1 by reference. If a local copy is required then an explicit call to the copy function is needed to make a local copy of the whole dataset. data.table's setkey, key<- and := do not copy-on-write.
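A minimal sketch of that distinction on a toy table (the variable names here are illustrative):

```r
library(data.table)

# := updates by reference; copy() is needed to keep the original intact.
dt  <- data.table(a = c(1, NA, 3))
dt2 <- copy(dt)          # deep copy: dt2 no longer shares memory with dt
dt2[is.na(a), a := 0]    # update the copy by reference
anyNA(dt)    # TRUE  - the original still has its NA
anyNA(dt2)   # FALSE - only the copy was changed
```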
Next, let's see where f_dowle is spending its time.
Rprof()
f_dowle(dt1)
Rprof(NULL)
summaryRprof()
$by.self
self.time self.pct total.time total.pct
"na.replace" 5.10 49.71 6.62 64.52
"[.data.table" 2.48 24.17 9.86 96.10
"is.na" 1.52 14.81 1.52 14.81
"gc" 0.22 2.14 0.22 2.14
"unique" 0.14 1.36 0.16 1.56
... snip ...
There, I would focus on na.replace and is.na, where there are a few vector copies and vector scans. Those can fairly easily be eliminated by writing a small na.replace C function that updates NA by reference in the vector. That would at least halve the 20 seconds I think. Does such a function exist in any R package?
The reason f_andrie fails may be because it copies the whole of dt1, or creates a logical matrix as big as the whole of dt1, a few times. The other 2 methods work on one column at a time (although I only briefly looked at NAToUnknown).
EDIT (more elegant solution as requested by Ramnath in comments):
f_dowle2 = function(DT) {
  for (i in names(DT))
    DT[is.na(get(i)), (i) := 0]
}
system.time(f_dowle2(dt1))
user system elapsed
6.468 0.760 7.250 # faster, too
identical(a_gdata, dt1)
[1] TRUE
I wish I did it that way to start with!
EDIT2 (1 year later, now)
There is also set(). This can be faster if there are a lot of columns being looped through, as it avoids the (small) overhead of calling [,:=,] in a loop. set is a loopable :=. See ?set.
f_dowle3 = function(DT) {
  # either of the following for loops
  # by name :
  for (j in names(DT))
    set(DT, which(is.na(DT[[j]])), j, 0)
  # or by number (slightly faster than by name) :
  for (j in seq_len(ncol(DT)))
    set(DT, which(is.na(DT[[j]])), j, 0)
}