Custom aggregation on PySpark dataframes
Question
I have a PySpark DataFrame with one column containing one-hot encoded vectors. I want to aggregate the different one-hot encoded vectors by vector addition after a groupBy, e.g.

df[userid, action]
Row1: ["1234", [1, 0, 0]]
Row2: ["1234", [0, 1, 0]]

I want the output as Row: ["1234", [1, 1, 0]], so the vector is the sum of all vectors grouped by userid.

How can I achieve this? PySpark's sum aggregate operation does not support vector addition.
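For concreteness, a minimal sketch reproducing the setup, assuming the one-hot vectors are stored as plain integer arrays (the column names userid and action come from the example above):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# The example data from above: one-hot vectors stored as array<int>
df = spark.createDataFrame(
    [("1234", [1, 0, 0]),
     ("1234", [0, 1, 0])],
    ["userid", "action"],
)

# df.groupBy("userid").sum("action") raises an error:
# sum is only defined for numeric columns, not arrays
```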
Answer
You have a few options:

- Create a user-defined aggregate function. The problem is that you will need to write the user-defined aggregate function in Scala and wrap it for use from Python.
- Use the collect_list function to collect all values into a list, then write a UDF to combine them (see the first sketch below).
- Move to the RDD API and use aggregate or aggregateByKey (see the second sketch below).
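A minimal sketch of option 2, assuming the vectors are fixed-length integer arrays; the helper sum_vectors is hypothetical, not a built-in:

```python
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, IntegerType

# Hypothetical helper: element-wise sum over a list of equal-length vectors
@F.udf(returnType=ArrayType(IntegerType()))
def sum_vectors(vectors):
    return [sum(components) for components in zip(*vectors)]

result = (
    df.groupBy("userid")
      .agg(F.collect_list("action").alias("actions"))
      .select("userid", sum_vectors("actions").alias("action"))
)
# For the example above this yields one row: ("1234", [1, 1, 0])
```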
Both options 2 & 3 would be relatively inefficient (costing both CPU and memory).
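As an illustration of option 3, a sketch using aggregateByKey on the underlying RDD, under the same array-of-ints assumption (the vector length is hard-coded to 3 to match the example):

```python
# Zero vector used as each key's initial accumulator
zero = [0, 0, 0]

summed = (
    df.rdd
      .map(lambda row: (row.userid, row.action))
      .aggregateByKey(
          zero,
          lambda acc, vec: [a + b for a, b in zip(acc, vec)],        # fold one vector into the accumulator
          lambda left, right: [a + b for a, b in zip(left, right)],  # merge two partition accumulators
      )
)

result = summed.toDF(["userid", "action"])
```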