Spark DataFrame: Computing row-wise mean (or any aggregate operation)
Question
I have a Spark DataFrame loaded in memory, and I want to take the mean (or any aggregate operation) over the columns. How would I do that? (In numpy, this is known as taking an operation over axis=1.)
If one were calculating the mean of the DataFrame down the rows (axis=0), then that is already built in:
from pyspark.sql import functions as F
F.mean(...)
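For instance, assuming the example DataFrame df shown below, the column-wise (axis=0) means could be computed like this (a sketch, not part of the original question):

# US, UK and Can are the value columns of the example DataFrame below;
# F is the functions module imported above.
df.select(F.mean("US"), F.mean("UK"), F.mean("Can")).show()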
But is there a way to programmatically do this against the entries in the columns? For example, from the DataFrame below
+--+--+---+---+
|id|US| UK|Can|
+--+--+---+---+
| 1|50| 0| 0|
| 1| 0|100| 0|
| 1| 0| 0|125|
| 2|75| 0| 0|
+--+--+---+---+
Omitting the id, the result would be
+------+
| mean|
+------+
| 16.66|
| 33.33|
| 41.67|
| 25.00|
+------+
Answer
All you need here is standard SQL, like this:
SELECT (US + UK + CAN) / 3 AS mean FROM df
which can be used directly with SqlContext.sql or expressed using the DSL:
df.select(((col("UK") + col("US") + col("CAN")) / lit(3)).alias("mean"))
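If you prefer the raw SQL route, here is a minimal sketch of running it end to end (the SparkSession setup, DataFrame construction, and view name are illustrative assumptions, reusing the example data from the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Recreate the example DataFrame from the question
df = spark.createDataFrame(
    [(1, 50, 0, 0), (1, 0, 100, 0), (1, 0, 0, 125), (2, 75, 0, 0)],
    ["id", "US", "UK", "Can"],
)

# Register a temporary view and run the query shown above
df.createOrReplaceTempView("df")
spark.sql("SELECT (US + UK + Can) / 3 AS mean FROM df").show()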
If you have a larger number of columns, you can generate the expression as follows:
from functools import reduce
from operator import add
from pyspark.sql.functions import col, lit
n = lit(len(df.columns) - 1.0)
rowMean = (reduce(add, (col(x) for x in df.columns[1:])) / n).alias("mean")
df.select(rowMean)
or
rowMean = (sum(col(x) for x in df.columns[1:]) / n).alias("mean")
df.select(rowMean)
Finally, its equivalent in Scala:
import org.apache.spark.sql.functions.col

df.select(df.columns
  .drop(1)
  .map(col)
  .reduce(_ + _)
  .divide(df.columns.size - 1)
  .alias("mean"))
In a more complex scenario, you can combine columns using the array function and use a UDF to compute statistics:
import numpy as np
from pyspark.sql.functions import array, udf
from pyspark.sql.types import FloatType
combined = array(*(col(x) for x in df.columns[1:]))
median_udf = udf(lambda xs: float(np.median(xs)), FloatType())
df.select(median_udf(combined).alias("median"))
The same operation expressed using the Scala API:
import org.apache.spark.sql.functions.{array, col, udf}
import org.apache.spark.sql.types.DoubleType

val combined = array(df.columns.drop(1).map(col).map(_.cast(DoubleType)): _*)
val median_udf = udf((xs: Seq[Double]) =>
  breeze.stats.DescriptiveStats.percentile(xs, 0.5))
df.select(median_udf(combined).alias("median"))
Since Spark 2.4, an alternative approach is to combine the values into an array and apply the aggregate expression. See for example Spark Scala row-wise average by handling null.
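A minimal sketch of that approach, assuming Spark >= 2.4 and the example columns US, UK and Can:

from pyspark.sql.functions import expr

# Sum the value columns with the aggregate higher-order function,
# then divide by the number of columns to get a row-wise mean.
df.select(
    (expr("aggregate(array(US, UK, Can), 0D, (acc, x) -> acc + x)") / 3).alias("mean")
).show()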