How to use window functions in PySpark?


Question

I'm trying to use some window functions (ntile and percentRank) with a data frame, but I don't know how to use them.

Can anyone help me with this, please? The Python API documentation has no examples for them.

Specifically, I'm trying to get quantiles of a numeric field in my data frame.

I'm using Spark 1.4.0.

Answer

To be able to use window functions you have to create a window first. The definition is pretty much the same as in normal SQL: you can define the ordering, the partitioning, or both. First let's create some dummy data:

import numpy as np
np.random.seed(1)

keys = ["foo"] * 10 + ["bar"] * 10
values = np.hstack([np.random.normal(0, 1, 10), np.random.normal(10, 1, 10)])

df = sqlContext.createDataFrame([
   {"k": k, "v": round(float(v), 3)} for k, v in zip(keys, values)])

Make sure you're using a HiveContext (Spark < 2.0 only):

from pyspark.sql import HiveContext

assert isinstance(sqlContext, HiveContext)

Create a window:

from pyspark.sql.window import Window

w =  Window.partitionBy(df.k).orderBy(df.v)

which is equivalent to

(PARTITION BY k ORDER BY v)

in SQL.

As a rule of thumb, window definitions should always contain a PARTITION BY clause; otherwise Spark will move all of the data to a single partition. ORDER BY is required for some functions, while for others (typically aggregates) it may be optional.

There are also two optional clauses which can be used to define the window span - ROWS BETWEEN and RANGE BETWEEN. They won't be useful to us in this particular scenario.
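For intuition, a frame like ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW turns an aggregate into a running aggregate over the ordered partition. Here is a minimal pure-Python sketch of that idea (no Spark required; the data below is made up for illustration):

```python
# Pure-Python illustration of a ROWS BETWEEN UNBOUNDED PRECEDING
# AND CURRENT ROW frame: each row aggregates over itself plus all
# preceding rows in the ordered partition. Made-up sample data.
values = [2.0, 1.0, 4.0, 3.0]

def running_sum(vs):
    out, total = [], 0.0
    for v in sorted(vs):   # ORDER BY v
        total += v         # frame: all preceding rows + current row
        out.append(total)
    return out

print(running_sum(values))  # [1.0, 3.0, 6.0, 10.0]
```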

Finally, we can use it in a query:

from pyspark.sql.functions import percentRank, ntile

df.select(
    "k", "v",
    percentRank().over(w).alias("percent_rank"),
    ntile(3).over(w).alias("ntile3")
)
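For reference, percentRank computes (rank - 1) / (rows in partition - 1) for each row of the ordered partition. A minimal pure-Python sketch of that formula (no Spark required; the numbers are made up for illustration):

```python
# Pure-Python illustration of what percentRank().over(w) computes
# within a single partition. Made-up sample data.
values = [1.0, 2.0, 2.0, 3.0]

def percent_rank(vs):
    n = len(vs)
    out = []
    for v in vs:
        # rank = 1 + number of values strictly smaller than v,
        # so tied values share the same rank
        rank = 1 + sum(1 for u in vs if u < v)
        out.append((rank - 1) / (n - 1))
    return out

print(percent_rank(values))  # ties get the same percent rank
```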

Note that ntile is not related in any way to quantiles.

