Remove duplicates from PySpark array column
Question
I have a PySpark DataFrame that contains an `ArrayType(StringType())` column. This column contains duplicate strings inside the array, which I need to remove. For example, one row entry could look like `[milk, bread, milk, toast]`. Let's say my dataframe is named `df` and my column is named `arraycol`. I need something like:
df = df.withColumn("arraycol_without_dupes", F.remove_dupes_from_array("arraycol"))
My intuition was that there exists a simple solution to this, but after browsing Stack Overflow for 15 minutes I didn't find anything better than exploding the column, removing duplicates on the complete dataframe, then grouping again. There has got to be a simpler way that I just didn't think of, right?
I am using Spark version 2.4.0.
Answer
For pyspark version 2.4+, you can use `pyspark.sql.functions.array_distinct`:
from pyspark.sql.functions import array_distinct
df = df.withColumn("arraycol_without_dupes", array_distinct("arraycol"))
For older versions, you can do this with the API functions using `explode` + `groupBy` and `collect_set`, but a `udf` is probably more efficient here:
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

remove_dupes_from_array = udf(lambda row: list(set(row)), ArrayType(StringType()))
df = df.withColumn("arraycol_without_dupes", remove_dupes_from_array("arraycol"))
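One caveat with the udf above: `list(set(row))` discards duplicates but also discards the original element order. If order matters, the lambda's body could instead use `dict.fromkeys`, which keeps the first occurrence of each element (dicts preserve insertion order in Python 3.7+). A plain-Python sketch of that replacement logic, runnable without Spark (the function name is just for illustration):

```python
def remove_dupes_preserving_order(row):
    # dict.fromkeys drops duplicates while keeping the first
    # occurrence of each element in its original position
    return list(dict.fromkeys(row))

print(remove_dupes_preserving_order(["milk", "bread", "milk", "toast"]))
# ['milk', 'bread', 'toast']
```

Wrapped in `udf(remove_dupes_preserving_order, ArrayType(StringType()))`, this would behave like `array_distinct` on older Spark versions.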