Remove duplicated rows using dplyr
Question

I have a data.frame like this -
set.seed(123)
df = data.frame(x=sample(0:1,10,replace=T),y=sample(0:1,10,replace=T),z=1:10)
> df
x y z
1 0 1 1
2 1 0 2
3 0 1 3
4 1 1 4
5 1 0 5
6 0 1 6
7 1 0 7
8 1 0 8
9 1 0 9
10 0 1 10
I would like to remove duplicate rows based on the first two columns. Expected output -
df[!duplicated(df[,1:2]),]
x y z
1 0 1 1
2 1 0 2
4 1 1 4
I am specifically looking for a solution using the dplyr package.
Answer
library(dplyr)
set.seed(123)
df <- data.frame(
x = sample(0:1, 10, replace = T),
y = sample(0:1, 10, replace = T),
z = 1:10
)
One approach would be to group, and then keep only the first row of each group:
df %>% group_by(x, y) %>% filter(row_number(z) == 1)
## Source: local data frame [3 x 3]
## Groups: x, y
##
## x y z
## 1 0 1 1
## 2 1 0 2
## 3 1 1 4
(In dplyr 0.2 you won't need the dummy z variable and will just be able to write row_number() == 1.)
I've also been thinking about adding a slice() function that would work like:
df %>% group_by(x, y) %>% slice(from = 1, to = 1)
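slice() was in fact added in later dplyr releases, with a simpler signature than sketched above: slice(1) keeps the first row of each group. A minimal sketch, assuming a reasonably current dplyr (note that the exact rows selected by sample() depend on your R version's RNG, so the output may differ from the listing above):

```r
library(dplyr)

set.seed(123)
df <- data.frame(
  x = sample(0:1, 10, replace = TRUE),
  y = sample(0:1, 10, replace = TRUE),
  z = 1:10
)

# slice(1) takes the first row within each (x, y) group,
# so no dummy ordering variable is needed.
df %>% group_by(x, y) %>% slice(1)
```

This keeps one row per unique (x, y) combination, matching the filter(row_number() == 1) approach.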
Or maybe a variation of unique() that would let you select which variables to use:
df %>% unique(x, y)