How to create a z-score in Spark SQL for each group
Question
I have a dataframe which looks like this
dSc TranAmount
1: 100021 79.64
2: 100021 79.64
3: 100021 0.16
4: 100022 11.65
5: 100022 0.36
6: 100022 0.47
7: 100025 0.17
8: 100037 0.27
9: 100056 0.27
10: 100063 0.13
11: 100079 0.13
12: 100091 0.15
13: 100101 0.22
14: 100108 0.14
15: 100109 0.04
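For readers who want to reproduce this, a minimal sketch that builds an equivalent DataFrame; the SparkSession setup is an assumption and targets Spark 2.x (on 1.x you would use sqlContext.createDataFrame instead):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data copied from the table above
datafromdb = spark.createDataFrame(
    [(100021, 79.64), (100021, 79.64), (100021, 0.16),
     (100022, 11.65), (100022, 0.36), (100022, 0.47),
     (100025, 0.17), (100037, 0.27), (100056, 0.27),
     (100063, 0.13), (100079, 0.13), (100091, 0.15),
     (100101, 0.22), (100108, 0.14), (100109, 0.04)],
    ["dSc", "TranAmount"])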
Now I want to create a third column with the z-score of each TranAmount
which will be
(TranAmount - mean(TranAmount)) / StdDev(TranAmount)
where the mean and standard deviation are computed within each dSc group.
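For concreteness, here is the arithmetic for one group, checked with plain Python (the population standard deviation is assumed here, matching stddev_pop below):

from statistics import mean, pstdev

# TranAmount values for dSc 100022
amounts = [11.65, 0.36, 0.47]
mu, sd = mean(amounts), pstdev(amounts)   # mu = 4.16, sd ≈ 5.2964
z = [(x - mu) / sd for x in amounts]      # ≈ [1.414, -0.717, -0.697]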
I can already compute the mean and standard deviation in Spark SQL:
from pyspark.sql import functions as func

(datafromdb
    .groupBy("dSc")
    .agg(func.avg("TranAmount"), func.stddev_pop("TranAmount")))
but I am at a loss as to how to add a third column with the z-score to the data frame. I would appreciate any pointers to the right way of achieving this.
Answer
You can, for example, compute the statistics and join them with the original data:
from pyspark.sql import functions as func

# Per-group statistics: population standard deviation and mean of TranAmount
stats = (df.groupBy("dSc")
    .agg(
        func.stddev_pop("TranAmount").alias("sd"),
        func.avg("TranAmount").alias("avg")))

# stats has one row per dSc, so broadcasting it avoids shuffling df for the join
(df
    .join(func.broadcast(stats), ["dSc"])
    .select("dSc", "TranAmount",
            ((df.TranAmount - stats.avg) / stats.sd).alias("zscore")))
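One caveat: several dSc groups above contain a single row, for which stddev_pop is 0.0, so the z-score expression divides by zero. A minimal sketch of guarding against that with func.when, leaving such z-scores null (the guard is an addition, not part of the original answer):

(df
    .join(func.broadcast(stats), ["dSc"])
    .withColumn(
        "zscore",
        func.when(stats.sd != 0,
                  (df.TranAmount - stats.avg) / stats.sd)))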
Alternatively, you can use window functions with the standard deviation formula:
from pyspark.sql import functions as func
from pyspark.sql.window import Window
import sys

def z_score_w(col, w):
    # Population variance via E[X^2] - E[X]^2, evaluated over the window
    avg_ = func.avg(col).over(w)
    avg_sq = func.avg(col * col).over(w)
    sd_ = func.sqrt(avg_sq - avg_ * avg_)
    return (col - avg_) / sd_

# Frame spanning the entire dSc partition
w = Window.partitionBy("dSc").rowsBetween(-sys.maxsize, sys.maxsize)
df.withColumn("zscore", z_score_w(df.TranAmount, w))
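On Spark 1.6 and later, stddev_pop can itself be evaluated over a window, so the manual variance formula above is not strictly needed; a minimal sketch under that version assumption:

from pyspark.sql import functions as func
from pyspark.sql.window import Window

# With no orderBy, the default frame covers the whole partition
w = Window.partitionBy("dSc")

df.withColumn(
    "zscore",
    (df.TranAmount - func.avg("TranAmount").over(w))
    / func.stddev_pop("TranAmount").over(w))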