Create a group id over a window in Spark Dataframe


Problem description

I have a dataframe where I want to assign an id to each window partition. For example, I have

id | col |
1  |  a  |
2  |  a  |
3  |  b  |
4  |  c  |
5  |  c  |

So I want (grouping based on column col):

id | group |
1  |  1    |
2  |  1    |
3  |  2    |
4  |  3    |
5  |  3    |

I want to use a window function, but I cannot find any way to assign an id to each window. I need something like:

from pyspark.sql import Window

w = Window.partitionBy('col')
df = df.withColumn("group", id().over(w))  # id() is a placeholder for the function I'm looking for

Is there any way to achieve something like that? (I cannot simply use col as the group id because I am interested in creating a window over multiple columns.)

Recommended answer

Simply using the dense_rank built-in function over a Window should give you your desired result:

from pyspark.sql import Window
import pyspark.sql.functions as f

# dense_rank gives rows with equal 'col' values the same rank, with no gaps,
# so consecutive groups get consecutive ids
df.select('id', f.dense_rank().over(Window.orderBy('col')).alias('group')).show(truncate=False)

which should give you

+---+-----+
|id |group|
+---+-----+
|1  |1    |
|2  |1    |
|3  |2    |
|4  |3    |
|5  |3    |
+---+-----+
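
Since the question mentions creating a window over multiple columns, the same approach extends directly: dense_rank accepts an ordering over several columns. Below is a minimal, self-contained sketch (the second column col2 and its values are hypothetical, added only for illustration):

from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()

# Recreate the example data, with a hypothetical extra column col2
df = spark.createDataFrame(
    [(1, 'a', 'x'), (2, 'a', 'x'), (3, 'b', 'x'), (4, 'c', 'y'), (5, 'c', 'y')],
    ['id', 'col', 'col2'])

# Rows sharing the same (col, col2) combination get the same group id
w = Window.orderBy('col', 'col2')
df.withColumn('group', f.dense_rank().over(w)).show(truncate=False)

One caveat: a window with orderBy but no partitionBy forces Spark to move all rows into a single partition to compute the rank (Spark logs a warning about this), so it can be slow on large datasets.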
