Partitioning by multiple columns in Spark SQL
Question
With Spark SQL's window functions, I need to partition by multiple columns to run my data queries, as follows:
val w = Window.partitionBy($"a").partitionBy($"b").rangeBetween(-100, 0)
I currently do not have a test environment (working on setting this up), but as a quick question: is this currently supported as part of Spark SQL's window functions, or will this not work?
Answer
This won't work. The second partitionBy will overwrite the first one. Both partition columns have to be specified in the same call:
val w = Window.partitionBy($"a", $"b").rangeBetween(-100, 0)
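A minimal sketch of how the corrected window definition might be used in practice. Note that a range frame such as rangeBetween(-100, 0) also requires an orderBy clause on the window, which the snippet above omits; the column names (a, b, ts, value) and the DataFrame df are hypothetical, and a SparkSession named spark is assumed to exist:

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.sum
import spark.implicits._  // enables the $"colName" column syntax

// Both partition columns go into a single partitionBy call;
// rangeBetween needs an ordering column to define the frame.
val w = Window
  .partitionBy($"a", $"b")
  .orderBy($"ts")
  .rangeBetween(-100, 0)

// Example: a rolling sum of `value` over rows whose `ts` lies
// within [current ts - 100, current ts], per (a, b) partition.
val result = df.withColumn("rolling_sum", sum($"value").over(w))
```

Calling partitionBy a second time does not compose with the first call; it simply replaces the window's partition specification, which is why both columns must be passed together.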