Spark SQL window function with complex condition
Question
This is probably easiest to explain through example. Suppose I have a DataFrame of user logins to a website, for instance:
scala> df.show(5)
+----------------+----------+
| user_name|login_date|
+----------------+----------+
|SirChillingtonIV|2012-01-04|
|Booooooo99900098|2012-01-04|
|Booooooo99900098|2012-01-06|
| OprahWinfreyJr|2012-01-10|
|SirChillingtonIV|2012-01-11|
+----------------+----------+
only showing top 5 rows
I would like to add to this a column indicating when they became an active user on the site. But there is one caveat: there is a time period during which a user is considered active, and after this period, if they log in again, their became_active date resets. Suppose this period is 5 days. Then the desired table derived from the above table would be something like this:
+----------------+----------+-------------+
| user_name|login_date|became_active|
+----------------+----------+-------------+
|SirChillingtonIV|2012-01-04| 2012-01-04|
|Booooooo99900098|2012-01-04| 2012-01-04|
|Booooooo99900098|2012-01-06| 2012-01-04|
| OprahWinfreyJr|2012-01-10| 2012-01-10|
|SirChillingtonIV|2012-01-11| 2012-01-11|
+----------------+----------+-------------+
So, in particular, SirChillingtonIV's became_active date was reset because their second login came after the active period expired, but Booooooo99900098's became_active date was not reset the second time he/she logged in, because it fell within the active period.
My initial thought was to use window functions with lag, and then use the lagged values to fill the became_active column; for instance, something starting roughly like:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
val window = Window.partitionBy("user_name").orderBy("login_date")
val df2 = df.withColumn("tmp", lag("login_date", 1).over(window))
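For intuition, here is a toy plain-Python sketch (not Spark, using a subset of the example dates) of what the tmp column produced by that lag would contain within one user's ordered window:

```python
# Toy plain-Python illustration (not Spark) of what lag("login_date", 1)
# yields inside one user's ordered window: each row sees the previous
# row's value, and the first row gets null (None).
dates = ["2012-01-04", "2012-01-11", "2012-01-14"]
tmp = [None] + dates[:-1]
print(tmp)  # [None, '2012-01-04', '2012-01-11']
```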
Then the rule to fill in the became_active date would be: if tmp is null (i.e., if it's the first ever login) or if login_date - tmp >= 5, then became_active = login_date; otherwise, go to the next most recent value in tmp and apply the same rule. This suggests a recursive approach, which I'm having trouble imagining a way to implement.
My questions: Is this a viable approach, and if so, how can I "go back" and look at earlier values of tmp until I find one where I stop? I can't, to my knowledge, iterate through the values of a Spark SQL Column. Is there another way to achieve this result?
Answer
Here is the trick. Import a bunch of functions:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{coalesce, datediff, lag, lit, min, sum}
Define the windows:
val userWindow = Window.partitionBy("user_name").orderBy("login_date")
val userSessionWindow = Window.partitionBy("user_name", "session")
Find the points where new sessions start:
val newSession = (coalesce(
  datediff($"login_date", lag($"login_date", 1).over(userWindow)),
  lit(0)
) > 5).cast("bigint")
val sessionized = df.withColumn("session", sum(newSession).over(userWindow))
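The reason this works: a running (cumulative) sum over the 0/1 new-session flags turns them into per-user session ids, which is what sum(newSession).over(userWindow) computes row by row. A toy plain-Python illustration (not Spark):

```python
# Toy illustration: the cumulative sum over the 0/1 "new session" flags
# turns them into session ids. The first login gets flag 0 here, mirroring
# coalesce(..., lit(0)) in the Spark version above.
from itertools import accumulate

flags = [0, 1, 0, 0, 1]  # 1 marks a login that starts a new session
session_ids = list(accumulate(flags))
print(session_ids)  # [0, 1, 1, 1, 2]
```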
Find the earliest date per session:
val result = sessionized
  .withColumn("became_active", min($"login_date").over(userSessionWindow))
  .drop("session")
With the dataset defined as:
val df = Seq(
("SirChillingtonIV", "2012-01-04"), ("Booooooo99900098", "2012-01-04"),
("Booooooo99900098", "2012-01-06"), ("OprahWinfreyJr", "2012-01-10"),
("SirChillingtonIV", "2012-01-11"), ("SirChillingtonIV", "2012-01-14"),
("SirChillingtonIV", "2012-08-11")
).toDF("user_name", "login_date")
The result is:
+----------------+----------+-------------+
| user_name|login_date|became_active|
+----------------+----------+-------------+
| OprahWinfreyJr|2012-01-10| 2012-01-10|
|SirChillingtonIV|2012-01-04| 2012-01-04| <- The first session for user
|SirChillingtonIV|2012-01-11| 2012-01-11| <- The second session for user
|SirChillingtonIV|2012-01-14| 2012-01-11|
|SirChillingtonIV|2012-08-11| 2012-08-11| <- The third session for user
|Booooooo99900098|2012-01-04| 2012-01-04|
|Booooooo99900098|2012-01-06| 2012-01-04|
+----------------+----------+-------------+
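The whole trick can be checked in plain Python with a small hypothetical helper (no Spark required): flag logins that start a new session, cumulative-sum the flags into session ids, then take the earliest login per session as became_active. Note this sketch flags the first login as 1 rather than 0, which shifts the session ids but not the result.

```python
# Plain-Python sketch (hypothetical helper, not the Spark answer itself):
# flag logins with a gap > max_gap days as session starts, cumulative-sum
# the flags into session ids, then take the earliest login per session.
from datetime import date
from itertools import accumulate, groupby

def became_active(logins, max_gap=5):
    out = []
    # ISO date strings sort chronologically, so each user's rows come out
    # in login order after sorting the (user, date) pairs.
    for user, rows in groupby(sorted(logins), key=lambda r: r[0]):
        dates = [date.fromisoformat(d) for _, d in rows]
        flags = [1] + [1 if (b - a).days > max_gap else 0
                       for a, b in zip(dates, dates[1:])]
        sessions = list(accumulate(flags))
        first = {}
        for s, d in zip(sessions, dates):
            first.setdefault(s, d)  # dates ascending, so first seen = earliest
        out.extend((user, d.isoformat(), first[s].isoformat())
                   for s, d in zip(sessions, dates))
    return out
```

Run on the question's dataset, this reproduces the table above; for instance SirChillingtonIV's 2012-01-14 login keeps the 2012-01-11 became_active date, while the 2012-08-11 login starts a fresh session.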