Spark: use reduceByKey instead of groupByKey and mapValues


Problem description

I have an RDD with duplicate values in the following format:

[ {key1: A}, {key1: A}, {key1: B}, {key1: C}, {key2: B}, {key2: B}, {key2: D}, ..]

I would like the new RDD to have the following output, with the duplicates removed:

[ {key1: [A,B,C]}, {key2: [B,D]}, ..]

I have managed to do this with the following code, by putting the values in a set to get rid of duplicates:

RDD_unique = RDD_duplicates.groupByKey().mapValues(lambda x: set(x))
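
For reference, here is a minimal runnable sketch of that groupByKey approach, assuming the data is stored as (key, value) tuples and using hypothetical string keys and values:

from pyspark import SparkContext

sc = SparkContext("local", "dedupe-example")

# Hypothetical sample data: (key, value) pairs containing duplicates.
RDD_duplicates = sc.parallelize([
    ("key1", "A"), ("key1", "A"), ("key1", "B"), ("key1", "C"),
    ("key2", "B"), ("key2", "B"), ("key2", "D"),
])

# Collect all values per key, then turn each group into a set to drop duplicates.
RDD_unique = RDD_duplicates.groupByKey().mapValues(set)

print(RDD_unique.collect())
# [('key1', {'A', 'B', 'C'}), ('key2', {'B', 'D'})]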

But I am trying to achieve this more elegantly, in a single command with

RDD_unique = RDD_duplicates.reduceByKey(...)

I have not managed to come up with a lambda function that gives me the same result with reduceByKey.

Recommended answer

You can do it like this:

data = (sc.parallelize([ {key1: A}, {key1: A}, {key1: B},
  {key1: C}, {key2: B}, {key2: B}, {key2: D}, ..]))

result = (data
  .mapValues(lambda x: {x})
  .reduceByKey(lambda s1, s2: s1.union(s2)))
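
Here is the same answer as a self-contained sketch with hypothetical (key, value) tuples, so it can be run directly:

# Assumes an existing SparkContext `sc`; keys and values are placeholders.
data = sc.parallelize([
    ("key1", "A"), ("key1", "A"), ("key1", "B"), ("key1", "C"),
    ("key2", "B"), ("key2", "B"), ("key2", "D"),
])

# Wrap each value in a singleton set, then merge the sets per key.
result = (data
          .mapValues(lambda x: {x})
          .reduceByKey(lambda s1, s2: s1.union(s2)))

print(result.collect())
# [('key1', {'A', 'B', 'C'}), ('key2', {'B', 'D'})]

Because reduceByKey combines values on the map side before the shuffle, duplicates are dropped earlier than with groupByKey, which ships every value across the network.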

