Spark Dataframe groupBy with sequence as keys arguments
Question
I have a Spark DataFrame and I want to aggregate values by multiple keys.
As the Spark documentation suggests:

def groupBy(col1: String, cols: String*): GroupedData
Groups the DataFrame using the specified columns, so we can run aggregation on them.
So I do the following:
val keys = Seq("a", "b", "c")
dataframe.groupBy(keys:_*).agg(...)
IntelliJ IDEA throws the following errors:

- Expansion of non-repeated parameter
- Type mismatch: expected Seq[Column], actual Seq[String]
However, I can pass multiple arguments manually without errors:
dataframe.groupBy("a", "b", "c").agg(...)
So, my question is: how can I do this programmatically?
Answer
Map the key names to columns and use the groupBy(cols: Column*) overload:
import org.apache.spark.sql.functions.col
val keys = Seq("a", "b", "c").map(col(_))
dataframe.groupBy(keys:_*).agg(...)
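A fuller, self-contained sketch of the Column-based variant. The session setup, DataFrame contents, and the sum aggregation are illustrative assumptions, not part of the original question:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, sum}

// Minimal sketch: a local session and a toy DataFrame (names are illustrative).
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("groupByKeys")
  .getOrCreate()
import spark.implicits._

val dataframe = Seq(
  ("x", "y", "z", 1),
  ("x", "y", "z", 2),
  ("x", "q", "z", 5)
).toDF("a", "b", "c", "v")

// Turn the key names into Columns, then splat them into groupBy(cols: Column*).
val keys = Seq("a", "b", "c").map(col)
dataframe.groupBy(keys: _*).agg(sum("v")).show()
```

The `: _*` ascription is what tells the compiler to expand the sequence into the varargs parameter; without it, Scala tries to match the whole Seq against a single argument, which is the type mismatch the question hit.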
or use head/tail with groupBy(col1: String, cols: String*):
val keys = Seq("a", "b", "c")
dataframe.groupBy(keys.head, keys.tail: _*).agg(...)
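One caveat with this variant: `keys.head` throws on an empty sequence, so a guard may be worth adding. A sketch with a hypothetical helper name; the check itself is an assumption, not part of the original answer:

```scala
import org.apache.spark.sql.{DataFrame, RelationalGroupedDataset}

// Hypothetical helper: group by a non-empty key list using the
// groupBy(col1: String, cols: String*) overload.
def groupByKeys(df: DataFrame, keys: Seq[String]): RelationalGroupedDataset = {
  require(keys.nonEmpty, "groupBy needs at least one key column")
  df.groupBy(keys.head, keys.tail: _*)
}
```

In recent Spark versions `groupBy` returns `RelationalGroupedDataset` (the older `GroupedData` name from the question was renamed in Spark 2.0), so the helper's return type may need adjusting to match the Spark version in use.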