show distinct column values in pyspark dataframe: python
Question
Please suggest a pyspark dataframe alternative for Pandas df['col'].unique(). I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values). Also I don't need groupby -> countDistinct; instead I want to check the distinct VALUES in that column.
Answer
Let's assume we're working with the following representation of data (two columns, k and v, where k contains three entries, two of them unique):
+---+---+
| k| v|
+---+---+
|foo| 1|
|bar| 2|
|foo| 3|
+---+---+
With a Pandas dataframe:
import pandas as pd
p_df = pd.DataFrame([("foo", 1), ("bar", 2), ("foo", 3)], columns=("k", "v"))
p_df['k'].unique()
This returns an ndarray, i.e. array(['foo', 'bar'], dtype=object).
You asked for a "pyspark dataframe alternative for pandas df['col'].unique()". Now, given the following Spark dataframe:
s_df = sqlContext.createDataFrame([("foo", 1), ("bar", 2), ("foo", 3)], ('k', 'v'))
If you want the same result from Spark, i.e. an ndarray, use toPandas():
s_df.toPandas()['k'].unique()
Alternatively, if you don't need an ndarray specifically and just want a list of the unique values of column k:
s_df.select('k').distinct().rdd.map(lambda r: r[0]).collect()
Finally, you can also use a list comprehension as follows:
[i.k for i in s_df.select('k').distinct().collect()]
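As a side note, the result of that list comprehension can be mimicked without a Spark session at all. The sketch below works on a plain-Python stand-in for the same toy rows; note that Spark's distinct() gives no ordering guarantee, whereas dict.fromkeys preserves first-seen order:

```python
# Toy rows mirroring the Spark dataframe above (plain Python, no Spark needed).
rows = [("foo", 1), ("bar", 2), ("foo", 3)]

# Order-preserving distinct over column k, analogous to
# [i.k for i in s_df.select('k').distinct().collect()]
distinct_k = list(dict.fromkeys(k for k, _ in rows))
print(distinct_k)  # ['foo', 'bar']
```

This is only a local illustration of what "distinct values of one column" means; for data that doesn't fit in memory you still want one of the Spark approaches above.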