Pyspark RDD ReduceByKey Multiple Functions
Question
I have a PySpark DataFrame named DF with (K, V) pairs. I would like to apply multiple functions with reduceByKey. For example, I have the following three simple functions:
def sumFunc(a,b): return a+b
def maxFunc(a,b): return max(a,b)
def minFunc(a,b): return min(a,b)
When I apply only one function, e.g., any of the following three works:
DF.reduceByKey(sumFunc) #works
DF.reduceByKey(maxFunc) #works
DF.reduceByKey(minFunc) #works
But when I apply more than one function, it does not work; e.g., the following do not work:
DF.reduceByKey(sumFunc, maxFunc, minFunc) # does not work
DF.reduceByKey(sumFunc, maxFunc) # does not work
DF.reduceByKey(maxFunc, minFunc) # does not work
DF.reduceByKey(sumFunc, minFunc) # does not work
I do not want to use groupByKey because it slows down the computation.
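One single-pass workaround (a sketch, not part of the original post): pack each value into a (min, max, sum) triple so that a single associative function can be handed to reduceByKey, as in `DF.mapValues(lambda v: (v, v, v)).reduceByKey(merge)`. The merge logic can be checked locally with plain Python:

```python
from functools import reduce

# Locally simulate reduceByKey for one key's values: each value becomes a
# (min, max, sum) seed triple, then the triples are merged pairwise.
def merge(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]), a[2] + b[2])

values = [1.0, 2.5]                   # e.g. the values seen for key "foo"
packed = [(v, v, v) for v in values]  # seed each value as (min, max, sum)
print(reduce(merge, packed))          # (1.0, 2.5, 3.5)
```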
Answer
If the input is a DataFrame, just use agg:
import pyspark.sql.functions as sqlf

df = sc.parallelize([
    ("foo", 1.0), ("foo", 2.5), ("bar", -1.0), ("bar", 99.0)
]).toDF(["k", "v"])

df.groupBy("k").agg(sqlf.min("v"), sqlf.max("v"), sqlf.sum("v")).show()
## +---+------+------+------+
## | k|min(v)|max(v)|sum(v)|
## +---+------+------+------+
## |bar| -1.0| 99.0| 98.0|
## |foo| 1.0| 2.5| 3.5|
## +---+------+------+------+
With RDDs you can use StatCounter:
from pyspark.statcounter import StatCounter

rdd = df.rdd
stats = rdd.aggregateByKey(
    StatCounter(), StatCounter.merge, StatCounter.mergeStats
).mapValues(lambda s: (s.min(), s.max(), s.sum()))
stats.collect()
## [('bar', (-1.0, 99.0, 98.0)), ('foo', (1.0, 2.5, 3.5))]
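The aggregateByKey call above threads a zero value through two functions: a seqOp that folds one value into an accumulator, and a combOp that merges accumulators from different partitions. The same contract can be exercised locally without Spark (a plain-Python sketch of what StatCounter does for min/max/sum):

```python
from functools import reduce

# aggregateByKey(zero, seqOp, combOp) contract, with a (min, max, sum) triple
# as the accumulator instead of a StatCounter.
zero = (float("inf"), float("-inf"), 0.0)

def seq_op(acc, v):   # fold one value into an accumulator
    return (min(acc[0], v), max(acc[1], v), acc[2] + v)

def comb_op(a, b):    # merge accumulators from two partitions
    return (min(a[0], b[0]), max(a[1], b[1]), a[2] + b[2])

part1 = reduce(seq_op, [1.0], zero)   # "foo" values seen on partition 1
part2 = reduce(seq_op, [2.5], zero)   # "foo" values seen on partition 2
print(comb_op(part1, part2))          # (1.0, 2.5, 3.5)
```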
Using your functions you could do something like this:
# combineByKey needs two separate merge functions here: mergeValue receives
# a bare value (not a triple), so one shared function would fail inside zip.
def mergeValue(acc, v, funs=[minFunc, maxFunc, sumFunc]):
    return [f(a, v) for f, a in zip(funs, acc)]

def mergeCombiners(a, b, funs=[minFunc, maxFunc, sumFunc]):
    return [f(u, w) for f, u, w in zip(funs, a, b)]

rdd.combineByKey(lambda x: (x, x, x), mergeValue, mergeCombiners).collect()
## [('bar', [-1.0, 99.0, 98.0]), ('foo', [1.0, 2.5, 3.5])]
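Note that the merge step passed to combineByKey (or reduceByKey) must be associative and commutative, because partition results are merged in arbitrary order. That property is easy to check locally for the (min, max, sum) triple:

```python
from functools import reduce
from itertools import permutations

def comb(a, b):
    # element-wise merge of two (min, max, sum) accumulators
    return (min(a[0], b[0]), max(a[1], b[1]), a[2] + b[2])

accs = [(1.0, 1.0, 1.0), (2.5, 2.5, 2.5), (-1.0, -1.0, -1.0)]
merged = {reduce(comb, p) for p in permutations(accs)}
print(merged)  # a single element means the merge order does not matter
```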