PySpark reduceByKey aggregation after collect_list on a column


Question

I want to take the following example to do my aggregation according to the 'states' collected by collect_list.

import operator  # needed for operator.add below

states = sc.parallelize(["TX","TX","CA","TX","CA"])  # 'sc' is the SparkContext
states.map(lambda x:(x,1)).reduceByKey(operator.add).collect()
#printed output: [('TX', 3), ('CA', 2)]

My code:

from pyspark import SparkContext,SparkConf
from pyspark.sql.session import SparkSession
from pyspark.sql.functions import collect_list
import operator
conf = SparkConf().setMaster("local")
conf = conf.setAppName("test")
sc = SparkContext.getOrCreate(conf=conf)
spark = SparkSession(sc)
rdd = sc.parallelize([('20170901',['TX','TX','CA','TX']), ('20170902', ['TX','CA','CA']), ('20170902',['TX']) ])
df = spark.createDataFrame(rdd, ["datatime", "actionlist"])
# after groupBy + collect_list, 'actionlist' becomes an array of arrays per datatime
df = df.groupBy("datatime").agg(collect_list("actionlist").alias("actionlist"))

rdd = df.select("actionlist").rdd.map(lambda x:(x,1))#.reduceByKey(operator.add)
print (rdd.take(2))
#printed output: [(Row(actionlist=[['TX', 'CA', 'CA'], ['TX']]), 1), (Row(actionlist=[['TX', 'TX', 'CA', 'TX']]), 1)]
#for the next step, it should look like:
#[Row(actionlist=[('TX',1), ('CA',1), ('CA',1), ('TX',1)]), Row(actionlist=[('TX',1), ('TX',1), ('CA',1), ('TX',1)])]

What I want is something like this:

20170901,[('TX', 3), ('CA', 1 )]
20170902,[('TX', 2), ('CA', 2 )]

I think the first step is to flatten the collect_list result. I've tried:

udf(lambda x: list(chain.from_iterable(x)), StringType())
udf(lambda items: list(chain.from_iterable(itertools.repeat(x,1) if isinstance(x,str) else x for x in items)))
udf(lambda l: [item for sublist in l for item in sublist])

But no luck yet. The next step is to make key-value pairs and reduce them; I've been stuck here for a while. Can any Spark expert help with the logic? I appreciate your help!

Answer

You can use reduce and Counter in a udf to achieve it. I tried it my way; hope this helps.

>>> from functools import reduce
>>> from collections import Counter
>>> from pyspark.sql.types import *
>>> from pyspark.sql import functions as F
>>> rdd = sc.parallelize([('20170901',['TX','TX','CA','TX']), ('20170902', ['TX','CA','CA']), ('20170902',['TX']) ])
>>> df = spark.createDataFrame(rdd, ["datatime", "actionlist"])
>>> df = df.groupBy("datatime").agg(F.collect_list("actionlist").alias("actionlist"))
>>> def someudf(row):
        value = reduce(lambda x,y:x+y,row)
        return Counter(value).most_common()

>>> schema = ArrayType(StructType([
    StructField("char", StringType(), False),
    StructField("count", IntegerType(), False)]))

>>> udf1 = F.udf(someudf,schema)
>>> df.select('datatime',udf1(df.actionlist)).show(2,False)
+--------+-------------------+
|datatime|someudf(actionlist)|
+--------+-------------------+
|20170902|[[TX,2], [CA,2]]   |
|20170901|[[TX,3], [CA,1]]   |
+--------+-------------------+
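As a side note (my own addition, not part of the original answer), on Spark 2.4+ the same result can be obtained without a Python udf, since F.flatten and F.explode let the counting stay in native DataFrame operations. A rough sketch, assuming the same grouped df as above:

from pyspark.sql import functions as F

# Flatten the nested array, explode to one action per row, count per (datatime, action),
# then pack the (action, count) pairs back into a list per datatime.
result = (df
    .withColumn("action", F.explode(F.flatten("actionlist")))
    .groupBy("datatime", "action")
    .count()
    .groupBy("datatime")
    .agg(F.collect_list(F.struct("action", "count")).alias("counts")))
result.show(truncate=False)

Keeping the aggregation in built-in functions avoids the Python serialization overhead of a udf, though the udf version in the answer works on older Spark versions as well.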

