How to drop columns which have same values in all rows via pandas or spark dataframe?
Suppose I have data similar to the following:
index id name value value2 value3 data1 val5
0 345 name1 1 99 23 3 66
1 12 name2 1 99 23 2 66
5 2 name6 1 99 23 7 66
How can we drop all those columns like (value, value2, value3) where all rows have the same values, in one command or a couple of commands using Python?
Consider that we have many columns similar to value, value2, value3 ... value200.
Output:
index id name data1
0 345 name1 3
1 12 name2 2
5 2 name6 7
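For reference, the sample frame above can be rebuilt like this (a minimal sketch; the dtypes and the non-default index [0, 1, 5] are inferred from the table shown):

```python
import pandas as pd

# Rebuild the sample data from the question; value, value2, value3 and val5
# are constant across all rows, data1 varies.
df = pd.DataFrame(
    {
        "id": [345, 12, 2],
        "name": ["name1", "name2", "name6"],
        "value": [1, 1, 1],
        "value2": [99, 99, 99],
        "value3": [23, 23, 23],
        "data1": [3, 2, 7],
        "val5": [66, 66, 66],
    },
    index=[0, 1, 5],
)
print(df.shape)  # (3, 7)
```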
What we can do is use nunique to calculate the number of unique values in each column of the dataframe, and drop the columns which only have a single unique value:
In [285]:
nunique = df.nunique()
cols_to_drop = nunique[nunique == 1].index
df.drop(cols_to_drop, axis=1)
Out[285]:
index id name data1
0 0 345 name1 3
1 1 12 name2 2
2 5 2 name6 7
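Put together, the nunique approach runs like this (the sample frame is rebuilt inline, as an assumption about the data above, so the sketch is self-contained):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "id": [345, 12, 2],
        "name": ["name1", "name2", "name6"],
        "value": [1, 1, 1],
        "value2": [99, 99, 99],
        "value3": [23, 23, 23],
        "data1": [3, 2, 7],
        "val5": [66, 66, 66],
    },
    index=[0, 1, 5],
)

# nunique() counts distinct values per column; a count of 1 means the
# column is constant across all rows, so it is safe to drop.
nunique = df.nunique()
cols_to_drop = nunique[nunique == 1].index
result = df.drop(cols_to_drop, axis=1)
print(list(result.columns))  # ['id', 'name', 'data1']
```

Note that nunique works on every dtype, so unlike the numeric-only approaches below it would also catch a constant string column.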
Another way is to just diff the numeric columns, take the absolute values and sum them:
In [298]:
cols = df.select_dtypes([np.number]).columns
diff = df[cols].diff().abs().sum()
df.drop(diff[diff == 0].index, axis=1)
Out[298]:
index id name data1
0 0 345 name1 3
1 1 12 name2 2
2 5 2 name6 7
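One caveat with the diff approach: select_dtypes only picks up numeric columns, so a constant string column survives it. A small sketch (the column values here are illustrative, not from the original data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "name": ["a", "a", "a"],  # constant but non-numeric: diff() never sees it
        "value": [1, 1, 1],       # constant numeric: dropped
        "data1": [3, 2, 7],       # varying numeric: kept
    }
)

# The absolute row-to-row differences sum to 0 only when a column never changes.
cols = df.select_dtypes([np.number]).columns
diff = df[cols].diff().abs().sum()
result = df.drop(diff[diff == 0].index, axis=1)
print(list(result.columns))  # ['name', 'data1']
```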
Another approach is to use the property that the standard deviation is zero for a column whose value is the same in every row:
In [300]:
cols = df.select_dtypes([np.number]).columns
std = df[cols].std()
cols_to_drop = std[std == 0].index
df.drop(cols_to_drop, axis=1)
Out[300]:
index id name data1
0 0 345 name1 3
1 1 12 name2 2
2 5 2 name6 7
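The std variant, assembled into a runnable sketch over the same assumed sample data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "id": [345, 12, 2],
        "name": ["name1", "name2", "name6"],
        "value": [1, 1, 1],
        "value2": [99, 99, 99],
        "data1": [3, 2, 7],
    },
    index=[0, 1, 5],
)

# A column with identical values in every row has standard deviation exactly 0.
cols = df.select_dtypes([np.number]).columns
std = df[cols].std()
result = df.drop(std[std == 0].index, axis=1)
print(list(result.columns))  # ['id', 'name', 'data1']
```

Like the diff version, this only inspects numeric columns; and if float values differ only by tiny rounding noise, the std will not be exactly 0, so the nunique check is the more robust of the three.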
Actually, the above can be done as a one-liner:
In [306]:
df.drop(df.std()[(df.std() == 0)].index, axis=1)
Out[306]:
index id name data1
0 0 345 name1 3
1 1 12 name2 2
2 5 2 name6 7
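On recent pandas versions, df.std() no longer silently skips non-numeric columns (it raises unless numeric_only=True is passed), so the one-liner above may need adjusting. A dtype-agnostic alternative, sketched here as an assumption rather than part of the original answer, selects columns by nunique directly:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "id": [345, 12, 2],
        "name": ["name1", "name2", "name6"],
        "value": [1, 1, 1],       # constant numeric
        "flag": ["x", "x", "x"],  # constant string: invisible to std()
        "data1": [3, 2, 7],
    }
)

# Keep only the columns with more than one distinct value, regardless of dtype.
result = df.loc[:, df.nunique() > 1]
print(list(result.columns))  # ['id', 'name', 'data1']
```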