Custom aggregation on PySpark DataFrames
Problem description
I have a PySpark DataFrame with one column containing one-hot encoded vectors. I want to aggregate the different one-hot encoded vectors by vector addition after a groupby.
e.g. df[userid, action]
Row1: ["1234", [1, 0, 0]]
Row2: ["1234", [0, 1, 0]]
I want the output as Row: ["1234", [1, 1, 0]], so the vector is the sum of all vectors grouped by userid.
How can I achieve this? The PySpark sum aggregate operation does not support vector addition.
You have several options:
- Create a user-defined aggregate function (UDAF). The problem is that you will need to write the UDAF in Scala and wrap it for use in Python.
- You can use the collect_list function to collect all values into a list and then write a UDF to combine them (see the first sketch after this list).
- You can move to the RDD API and use aggregate or aggregateByKey (see the second sketch after this list).
Both options 2 & 3 would be relatively inefficient (costing both cpu and memory).