Spark dataframe reduceByKey-like operation
Question
I have a Spark dataframe with the following data (I use spark-csv to load the data in):
key,value
1,10
2,12
3,0
1,20
Is there anything similar to the Spark RDD reduceByKey that can return a Spark DataFrame as follows (basically, summing the values for the same key)?
key,value
1,30
2,12
3,0
(I can transform the data to an RDD and do a reduceByKey operation, but is there a more Spark-DataFrame-API way to do this?)
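For reference, the RDD route mentioned above can be sketched as follows. This is a minimal sketch, assuming the two columns have already been cast to integers (spark-csv loads columns as strings unless schema inference is enabled):

```scala
import org.apache.spark.sql.Row

// Drop to the underlying RDD, build (key, value) pairs,
// and sum the values per key with reduceByKey.
val summed = df.rdd
  .map { case Row(k: Int, v: Int) => (k, v) }
  .reduceByKey(_ + _)
```

This works, but it loses the DataFrame schema and any Catalyst optimizations, which is why the DataFrame-native answer below is preferable.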
Answer
If you don't care about column names, you can use groupBy followed by sum:
df.groupBy($"key").sum("value")
Otherwise it is better to replace sum with agg:
df.groupBy($"key").agg(sum($"value").alias("value"))
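Note that the agg version relies on the sum column function (not the groupBy shortcut), so it needs an import; a self-contained sketch, assuming the column names from the question:

```scala
import org.apache.spark.sql.functions.sum

// Group by key and sum the values, keeping the original column name.
val result = df.groupBy($"key").agg(sum($"value").alias("value"))
```

The alias keeps the output column named "value" instead of the auto-generated "sum(value)".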
Finally, you can use raw SQL:
df.registerTempTable("df")
sqlContext.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")
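As a side note, on Spark 2.x and later registerTempTable is deprecated; the equivalent SQL approach goes through a SparkSession (assumed to be in scope as spark):

```scala
// Spark 2.x+ equivalent of the snippet above:
// createOrReplaceTempView replaces the deprecated registerTempTable,
// and spark.sql replaces sqlContext.sql.
df.createOrReplaceTempView("df")
spark.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")
```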