Spark Python - how to use reduceByKey to get minimum/maximum values

Problem description

I have some sample data, in CSV format, with the maximum and minimum temperatures for a few cities.

Mumbai,19,30
Delhi,5,41
Kolkata,20,40
Mumbai,18,35
Delhi,4,42
Delhi,10,44
Kolkata,19,39

I want to find the all-time minimum temperature recorded for each city, using a Spark script in Python.

Here is my script:

cityTemp = sc.textFile("weather.txt").map(lambda x: x.split(','))

# convert it to pair RDD for performing reduce by Key

cityTemp = cityTemp.map(lambda x: (x[0], tuple(x[1:])))

cityTempMin = cityTemp.reduceByKey(lambda x, y: min(x[0],y[0]))

cityTempMin.collect()

My expected output is as follows:

Delhi, 4
Mumbai, 18
Kolkata, 19

However, the script is producing the following output:

[(u'Kolkata', u'19'), (u'Mumbai', u'18'), (u'Delhi', u'1')]

How can I get the desired output?

Recommended answer

If you have to use the reduceByKey function, try the solution below:

SCALA:

// build (city, minTemp) pairs and key them by city
val df = sc.parallelize(Seq(("Mumbai", 19, 30),
  ("Delhi", 5, 41),
  ("Kolkata", 20, 40),
  ("Mumbai", 18, 35),
  ("Delhi", 4, 42),
  ("Delhi", 10, 44),
  ("Kolkata", 19, 39))).map(x => (x._1, x._2)).keyBy(_._1)

// keep the record with the smaller minimum temperature per city
df.reduceByKey((accum, n) => if (accum._2 > n._2) n else accum).map(_._2).collect().foreach(println)

PYTHON:

rdd = sc.parallelize([("Mumbai", 19, 30),
    ("Delhi", 5, 41),
    ("Kolkata", 20, 40),
    ("Mumbai", 18, 35),
    ("Delhi", 4, 42),
    ("Delhi", 10, 44),
    ("Kolkata", 19, 39)])

# keep whichever (city, minTemp) tuple has the smaller minimum temperature
def reduceFunc(accum, n):
    print(accum, n)
    if accum[1] > n[1]:
        return n
    else:
        return accum

# drop the maxTemp column, keeping only (city, minTemp)
def mapFunc(lines):
    return (lines[0], lines[1])

rdd.map(mapFunc).keyBy(lambda x: x[0]).reduceByKey(reduceFunc).map(lambda x: x[1]).collect()

Output:

(Kolkata,19)
(Delhi,4)
(Mumbai,18)
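
For reference, the reason the original script goes wrong is that the lambda passed to reduceByKey returns min(x[0], y[0]), a bare string rather than a tuple, so on later merge steps x[0] picks out the first character of that string, and the comparison is lexicographic on strings instead of numeric. A smaller fix, shown here only as a rough sketch assuming the same weather.txt input as in the question, is to keep just the minimum-temperature column as the value, cast it to int, and reduce with min directly:

cityTemp = sc.textFile("weather.txt").map(lambda x: x.split(','))

# keep (city, minTemp) pairs, casting the temperature to int so that
# min() compares numbers rather than strings
cityTempMin = cityTemp.map(lambda x: (x[0], int(x[1]))).reduceByKey(min)

cityTempMin.collect()
# e.g. [('Kolkata', 19), ('Mumbai', 18), ('Delhi', 4)] (order may vary)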

If you don't want to use reduceByKey, a simple groupBy followed by the min function will give you the desired result:

val df = sc.parallelize(Seq(("Mumbai", 19, 30),
  ("Delhi", 5, 41),
  ("Kolkata", 20, 40),
  ("Mumbai", 18, 35),
  ("Delhi", 4, 42),
  ("Delhi", 10, 44),
  ("Kolkata", 19, 39))).toDF("city", "minTemp", "maxTemp")

df.groupBy("city").agg(min("minTemp")).show

Output:

+-------+------------+
|   city|min(minTemp)|
+-------+------------+
| Mumbai|          18|
|Kolkata|          19|
|  Delhi|           4|
+-------+------------+
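
Since the question is about Python, a rough PySpark equivalent of the same groupBy approach would look like the following (a sketch, assuming a SparkSession is available as spark):

from pyspark.sql import functions as F

df = spark.createDataFrame([("Mumbai", 19, 30),
    ("Delhi", 5, 41),
    ("Kolkata", 20, 40),
    ("Mumbai", 18, 35),
    ("Delhi", 4, 42),
    ("Delhi", 10, 44),
    ("Kolkata", 19, 39)], ["city", "minTemp", "maxTemp"])

# group by city and take the smallest minTemp per group
df.groupBy("city").agg(F.min("minTemp")).show()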
