group by and picking up first value in spark sql


Problem description

I am doing a group by in Spark SQL. Some rows contain the same value but with different IDs; in that case, I want to select the first row.

Here is my code:

    val highvalueresult = highvalue
      .select($"tagShortID", $"Timestamp", $"ListenerShortID", $"rootOrgID", $"subOrgID", $"RSSI_Weight_avg")
      .groupBy("tagShortID", "Timestamp")
      .agg(max($"RSSI_Weight_avg").alias("RSSI_Weight_avg"))

    val t2 = averageDF.join(highvalueresult, Seq("tagShortID", "Timestamp", "RSSI_Weight_avg"))

Here is my result:

tag,timestamp,rssi,listner,rootorg,suborg
2,1496745906,0.7,3878,4,3
4,1496745907,0.6,362,4,3
4,1496745907,0.6,718,4,3
4,1496745907,0.6,1901,4,3

In the result above, for timestamp 1496745907 the rssi value is the same for three listeners. In this case I want to select the first row.

Recommended answer

You can use the window function support that the Spark SQL context has. Assuming your dataframe (here called df) is:

+---+----------+----+-------+-------+------+
|tag| timestamp|rssi|listner|rootorg|suborg|
+---+----------+----+-------+-------+------+
|  2|1496745906| 0.7|   3878|      4|     3|
|  4|1496745907| 0.6|    362|      4|     3|
|  4|1496745907| 0.6|    718|      4|     3|
|  4|1496745907| 0.6|   1901|      4|     3|
+---+----------+----+-------+-------+------+

Define a window function (you can partition by / order by your columns):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

val window = Window.partitionBy("timestamp", "rssi").orderBy("timestamp")
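
Note that this window orders by the same column it partitions by, so which row ends up "first" within each partition is arbitrary. If you need a deterministic first row, order by a tie-breaking column instead; ordering by listner below is only an assumption, since the question does not say which row should count as first:

// Assumed tie-breaker: ordering by the listener ID makes the "first" row reproducible.
val deterministicWindow = Window.partitionBy("timestamp", "rssi").orderBy("listner")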

Apply the window function:

val ranked = df.withColumn("rank", row_number().over(window))
+---+----------+----+-------+-------+------+----+
|tag| timestamp|rssi|listner|rootorg|suborg|rank|
+---+----------+----+-------+-------+------+----+
|  4|1496745907| 0.6|    362|      4|     3|   1|
|  4|1496745907| 0.6|    718|      4|     3|   2|
|  4|1496745907| 0.6|   1901|      4|     3|   3|
|  2|1496745906| 0.7|   3878|      4|     3|   1|
+---+----------+----+-------+-------+------+----+

Select the first row of each window:

ranked.where($"rank" === 1)
+---+----------+----+-------+-------+------+----+
|tag| timestamp|rssi|listner|rootorg|suborg|rank|
+---+----------+----+-------+-------+------+----+
|  4|1496745907| 0.6|    362|      4|     3|   1|
|  2|1496745906| 0.7|   3878|      4|     3|   1|
+---+----------+----+-------+-------+------+----+
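
For reference, here is a self-contained sketch of the whole approach. This is a minimal sketch, not the poster's exact program: the SparkSession setup and the sample data are assumptions reconstructed from the question's result table.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

val spark = SparkSession.builder().appName("first-row-per-group").getOrCreate()
import spark.implicits._

// Sample data copied from the question's result table.
val df = Seq(
  (2, 1496745906L, 0.7, 3878, 4, 3),
  (4, 1496745907L, 0.6, 362, 4, 3),
  (4, 1496745907L, 0.6, 718, 4, 3),
  (4, 1496745907L, 0.6, 1901, 4, 3)
).toDF("tag", "timestamp", "rssi", "listner", "rootorg", "suborg")

// Rank rows within each (timestamp, rssi) group, keep only the first row, and drop the helper column.
val window = Window.partitionBy("timestamp", "rssi").orderBy("timestamp")
val firstRows = df
  .withColumn("rank", row_number().over(window))
  .where($"rank" === 1)
  .drop("rank")

firstRows.show()

Dropping the rank column at the end is only a convenience so the result has the same schema as the input.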
