How to drop columns which have same values in all rows via pandas or spark dataframe?
Question
Suppose I have data similar to the following:
index id name value value2 value3 data1 val5
0 345 name1 1 99 23 3 66
1 12 name2 1 99 23 2 66
5 2 name6 1 99 23 7 66
How can we drop all those columns like (value, value2, value3) where all rows have the same values, in one command or a couple of commands, using Python?
Consider that we have many columns similar to value, value2, value3 ... value200.
Output:
index id name data1
0 345 name1 3
1 12 name2 2
5 2 name6 7
Answer
What we can do is apply nunique to calculate the number of unique values in each column of the df, and drop the columns which have only a single unique value:
In [285]:
import pandas as pd

cols = list(df)
nunique = df.apply(pd.Series.nunique)
cols_to_drop = nunique[nunique == 1].index
df.drop(cols_to_drop, axis=1)
Out[285]:
index id name data1
0 0 345 name1 3
1 1 12 name2 2
2 5 2 name6 7
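Put together, the nunique approach can be run end to end as the sketch below. The DataFrame reconstructs the question's sample data, and `df.nunique()` is used as a shorthand equivalent to `df.apply(pd.Series.nunique)`:

```python
import pandas as pd

# Reconstruction of the question's sample data
df = pd.DataFrame({
    "id": [345, 12, 2],
    "name": ["name1", "name2", "name6"],
    "value": [1, 1, 1],
    "value2": [99, 99, 99],
    "value3": [23, 23, 23],
    "data1": [3, 2, 7],
    "val5": [66, 66, 66],
})

# Count distinct values per column; a constant column has exactly one
nunique = df.nunique()
cols_to_drop = nunique[nunique == 1].index
result = df.drop(cols_to_drop, axis=1)
print(list(result.columns))  # ['id', 'name', 'data1']
```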
Another way is to just diff the numeric columns and sum them:
In [298]:
import numpy as np

cols = df.select_dtypes([np.number]).columns
diff = df[cols].diff().sum()
df.drop(diff[diff == 0].index, axis=1)
Out[298]:
index id name data1
0 0 345 name1 3
1 1 12 name2 2
2 5 2 name6 7
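One caveat with the diff-and-sum approach: a zero sum of differences does not guarantee the column is constant, because positive and negative differences can cancel each other out. The sketch below uses a hypothetical `wave` column to show the false positive:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "const": [5, 5, 5],   # truly constant: diffs are 0, sum is 0
    "wave": [1, 2, 1],    # not constant, but diffs are +1 and -1, sum is also 0
    "data1": [3, 2, 7],
})

cols = df.select_dtypes([np.number]).columns
diff_sum = df[cols].diff().sum()
dropped = df.drop(diff_sum[diff_sum == 0].index, axis=1)
print(list(dropped.columns))  # ['data1'] - 'wave' was dropped too, a false positive
```

The nunique- and std-based approaches do not have this problem.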
Another approach is to use the property that the standard deviation of a column whose values are all the same is zero:
In [300]:
cols = df.select_dtypes([np.number]).columns
std = df[cols].std()
cols_to_drop = std[std == 0].index
df.drop(cols_to_drop, axis=1)
Out[300]:
index id name data1
0 0 345 name1 3
1 1 12 name2 2
2 5 2 name6 7
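As a self-contained sketch of the standard-deviation approach (the frame below is a reduced version of the question's sample data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "id": [345, 12, 2],
    "value": [1, 1, 1],       # constant -> std == 0
    "value2": [99.0, 99.0, 99.0],
    "data1": [3, 2, 7],
})

# std() is only defined for numeric columns, so restrict to those first
cols = df.select_dtypes([np.number]).columns
std = df[cols].std()
result = df.drop(std[std == 0].index, axis=1)
print(list(result.columns))  # ['id', 'data1']
```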
Actually, the above can be done in a one-liner:
In [306]:
df.drop(df.std()[(df.std() == 0)].index, axis=1)
Out[306]:
index id name data1
0 0 345 name1 3
1 1 12 name2 2
2 5 2 name6 7
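Note that `df.std()` only considers numeric columns (and on recent pandas versions it raises a `TypeError` when string columns are present unless `numeric_only=True` is passed), so a constant string column would survive the one-liner above. A `nunique`-based one-liner works for any dtype; the `tag` column below is a hypothetical constant string column added to illustrate this:

```python
import pandas as pd

df = pd.DataFrame({
    "id": [345, 12, 2],
    "name": ["name1", "name2", "name6"],
    "value": [1, 1, 1],
    "tag": ["x", "x", "x"],  # constant string column: std() never sees it
    "data1": [3, 2, 7],
})

# nunique() works for every dtype, so constant strings are dropped too
result = df.drop(df.columns[df.nunique() == 1], axis=1)
print(list(result.columns))  # ['id', 'name', 'data1']
```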