How to create a z-score in Spark SQL for each group


Question

I have a dataframe which looks like this:

        dSc     TranAmount
 1: 100021      79.64
 2: 100021      79.64
 3: 100021       0.16
 4: 100022      11.65
 5: 100022       0.36
 6: 100022       0.47
 7: 100025       0.17
 8: 100037       0.27
 9: 100056       0.27
10: 100063       0.13
11: 100079       0.13
12: 100091       0.15
13: 100101       0.22
14: 100108       0.14
15: 100109       0.04
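
For reference, here is a minimal sketch that rebuilds this sample data as a PySpark DataFrame (it assumes an existing SparkSession named spark; the question refers to the DataFrame as datafromdb, and the answer below calls the same data df):

# Hypothetical reconstruction of the sample data shown above.
data = [
    (100021, 79.64), (100021, 79.64), (100021, 0.16),
    (100022, 11.65), (100022, 0.36), (100022, 0.47),
    (100025, 0.17), (100037, 0.27), (100056, 0.27),
    (100063, 0.13), (100079, 0.13), (100091, 0.15),
    (100101, 0.22), (100108, 0.14), (100109, 0.04),
]
datafromdb = spark.createDataFrame(data, ["dSc", "TranAmount"])
df = datafromdb  # the answer's snippets refer to the same data as `df`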

Now I want to create a third column with the z-score of each TranAmount, which will be

(TranAmount-mean(TranAmount))/StdDev(TranAmount)

Here the mean and standard deviation are computed within each dSc group.
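
For example, in group 100021 the mean is (79.64 + 79.64 + 0.16) / 3 ≈ 53.15 and the population standard deviation is ≈ 37.47, so the z-score of the first row is (79.64 - 53.15) / 37.47 ≈ 0.71.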

I can already compute the mean and standard deviation in Spark SQL:

import pyspark.sql.functions as func

# Per-group mean and population stddev; dSc is included automatically.
(datafromdb
    .groupBy("dSc")
    .agg(func.avg("TranAmount"), func.stddev_pop("TranAmount")))

But I am at a loss as to how to add a third column with the z-scores to the DataFrame. I would appreciate any pointers to the right way of achieving this.

Solution

You can, for example, compute the per-group statistics and join them back to the original data:

import pyspark.sql.functions as func

# Per-group statistics: population standard deviation and mean of TranAmount.
stats = (df.groupBy("dSc")
    .agg(
        func.stddev_pop("TranAmount").alias("sd"),
        func.avg("TranAmount").alias("avg")))

# Broadcast the small per-group table, join it back to the data,
# and derive the z-score from the joined columns.
(df
    .join(func.broadcast(stats), ["dSc"])
    .select(
        "dSc", "TranAmount",
        ((df.TranAmount - stats["avg"]) / stats["sd"]).alias("zscore")))

Or use window functions with the standard-deviation formula:

import sys

import pyspark.sql.functions as func
from pyspark.sql.window import Window

def z_score_w(col, w):
    # Population standard deviation via sqrt(E[x^2] - E[x]^2),
    # evaluated over the window w.
    avg_ = func.avg(col).over(w)
    avg_sq = func.avg(col * col).over(w)
    sd_ = func.sqrt(avg_sq - avg_ * avg_)
    return (col - avg_) / sd_

# Unbounded window spanning all rows of each dSc group.
w = Window.partitionBy("dSc").rowsBetween(-sys.maxsize, sys.maxsize)
df.withColumn("zscore", z_score_w(df.TranAmount, w))
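
On Spark 1.6 and later, stddev_pop can itself be evaluated over a window, so the manual variance formula above can be replaced by a more direct variant (a sketch, reusing the same df and w as above):

df.withColumn(
    "zscore",
    (df.TranAmount - func.avg("TranAmount").over(w))
    / func.stddev_pop("TranAmount").over(w))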
