Spark DataFrame: Computing row-wise mean (or any aggregate operation)


Question

I have a Spark DataFrame loaded up in memory, and I want to take the mean (or any aggregate operation) over the columns. How would I do that? (In numpy, this is known as taking an operation over axis=1).

If one were calculating the mean of the DataFrame down the rows (axis=0), then this is already built in:

from pyspark.sql import functions as F
F.mean(...)
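
For instance, per-column means of the example DataFrame shown below (a sketch; the column names US, UK and Can come from the question, and df is assumed to be that DataFrame):

# one mean per column (axis=0 in numpy terms)
df.select(F.mean("US"), F.mean("UK"), F.mean("Can")).show()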

But is there a way to programmatically do this against the entries in the columns? For example, from the DataFrame below

+--+--+---+---+
|id|US| UK|Can|
+--+--+---+---+
| 1|50|  0|  0|
| 1| 0|100|  0|
| 1| 0|  0|125|
| 2|75|  0|  0|
+--+--+---+---+

omitting id, the row-wise means would be

+------+
|  mean|
+------+
| 16.66|
| 33.33|
| 41.67|
| 25.00|
+------+
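
For reference, a minimal sketch that builds this example DataFrame, assuming an existing SparkSession named spark:

df = spark.createDataFrame(
    [(1, 50, 0, 0), (1, 0, 100, 0), (1, 0, 0, 125), (2, 75, 0, 0)],
    ["id", "US", "UK", "Can"],
)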

Answer

All you need here is standard SQL like this:

SELECT (US + UK + CAN) / 3 AS mean FROM df

which can be used directly with SqlContext.sql or expressed using the DataFrame DSL:

df.select(((col("UK") + col("US") + col("CAN")) / lit(3)).alias("mean"))

If you have a larger number of columns you can generate the expression as follows:

from functools import reduce
from operator import add
from pyspark.sql.functions import col, lit

n = lit(len(df.columns) - 1.0)  # number of value columns (everything except id)
rowMean = (reduce(add, (col(x) for x in df.columns[1:])) / n).alias("mean")

df.select(rowMean)

or, equivalently, with Python's built-in sum:

rowMean = (sum(col(x) for x in df.columns[1:]) / n).alias("mean")
df.select(rowMean)
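
Either variant yields the means shown in the question; a sketch that also rounds the result to two decimals:

from pyspark.sql.functions import round as spark_round

df.select(spark_round(rowMean, 2).alias("mean")).show()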

Finally, its equivalent in Scala:

import org.apache.spark.sql.functions.col

df.select(df.columns
  .drop(1)
  .map(col)
  .reduce(_ + _)
  .divide(df.columns.size - 1)
  .alias("mean"))

In a more complex scenario you can combine columns using the array function and use a UDF to compute statistics:

import numpy as np
from pyspark.sql.functions import array, col, udf
from pyspark.sql.types import FloatType

combined = array(*(col(x) for x in df.columns[1:]))
median_udf = udf(lambda xs: float(np.median(xs)), FloatType())

df.select(median_udf(combined).alias("median"))

The same operation expressed using the Scala API:

import org.apache.spark.sql.functions.{array, col, udf}
import org.apache.spark.sql.types.DoubleType

val combined = array(df.columns.drop(1).map(col).map(_.cast(DoubleType)): _*)
val median_udf = udf((xs: Seq[Double]) =>
    breeze.stats.DescriptiveStats.percentile(xs, 0.5))

df.select(median_udf(combined).alias("median"))

Since Spark 2.4, an alternative approach is to combine the values into an array and apply an aggregate expression. See, for example, Spark Scala row-wise average by handling null.
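
A sketch of that Spark 2.4+ approach in PySpark, using the aggregate higher-order function via expr (the expression here is illustrative and not taken from the linked answer):

from pyspark.sql.functions import expr

value_cols = df.columns[1:]  # every column except id
mean_expr = "aggregate(array({cols}), 0D, (acc, x) -> acc + x) / {n}".format(
    cols=", ".join(value_cols), n=len(value_cols)
)

df.select(expr(mean_expr).alias("mean"))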

