Remove duplicated rows using dplyr
Question
I have a data.frame like this:
set.seed(123)
df = data.frame(x=sample(0:1,10,replace=T),y=sample(0:1,10,replace=T),z=1:10)
> df
x y z
1 0 1 1
2 1 0 2
3 0 1 3
4 1 1 4
5 1 0 5
6 0 1 6
7 1 0 7
8 1 0 8
9 1 0 9
10 0 1 10
I would like to remove duplicate rows based on the first two columns. Expected output:
df[!duplicated(df[,1:2]),]
x y z
1 0 1 1
2 1 0 2
4 1 1 4
I am specifically looking for a solution using the dplyr package.
Answer
Note: dplyr now contains the distinct() function for this purpose.
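For reference, a minimal sketch of the distinct() approach the note mentions. The data frame is written out literally to match the table shown above, since sample() results can differ across R versions even with the same seed:

```r
library(dplyr)

# Same data as in the question, written out literally so the result
# does not depend on the RNG algorithm used by sample()
df <- data.frame(
  x = c(0, 1, 0, 1, 1, 0, 1, 1, 1, 0),
  y = c(1, 0, 1, 1, 0, 1, 0, 0, 0, 1),
  z = 1:10
)

# Keep the first row for each (x, y) combination; .keep_all = TRUE
# retains the remaining columns (here, z)
result <- distinct(df, x, y, .keep_all = TRUE)
result
#>   x y z
#> 1 0 1 1
#> 2 1 0 2
#> 3 1 1 4
```

Without .keep_all = TRUE, distinct(df, x, y) would drop the z column entirely.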
Original answer below:
library(dplyr)
set.seed(123)
df <- data.frame(
x = sample(0:1, 10, replace = T),
y = sample(0:1, 10, replace = T),
z = 1:10
)
One approach would be to group, and then keep only the first row:
df %>% group_by(x, y) %>% filter(row_number(z) == 1)
## Source: local data frame [3 x 3]
## Groups: x, y
##
## x y z
## 1 0 1 1
## 2 1 0 2
## 3 1 1 4
(In dplyr 0.2 you won't need the dummy z variable and will just be able to write row_number() == 1.)
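In current dplyr versions that does work; a sketch of the same filter without the dummy variable (the data frame is written out literally to match the table above):

```r
library(dplyr)

df <- data.frame(
  x = c(0, 1, 0, 1, 1, 0, 1, 1, 1, 0),
  y = c(1, 0, 1, 1, 0, 1, 0, 0, 0, 1),
  z = 1:10
)

# row_number() with no argument numbers rows within each group,
# so this keeps only the first row per (x, y) combination
result <- df %>%
  group_by(x, y) %>%
  filter(row_number() == 1) %>%
  ungroup()
```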
I've also been thinking about adding a slice() function that would work like:
df %>% group_by(x, y) %>% slice(from = 1, to = 1)
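slice() was indeed later added to dplyr, though it takes positional indices rather than the from/to arguments sketched above. A hedged sketch of the equivalent call in a modern dplyr:

```r
library(dplyr)

df <- data.frame(
  x = c(0, 1, 0, 1, 1, 0, 1, 1, 1, 0),
  y = c(1, 0, 1, 1, 0, 1, 0, 0, 0, 1),
  z = 1:10
)

# slice(1) takes the first row within each group
result <- df %>%
  group_by(x, y) %>%
  slice(1) %>%
  ungroup()
```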
Or maybe a variation of unique() that would let you select which variables to use:
df %>% unique(x, y)