Get the row corresponding to the latest timestamp in a Spark Dataset using Scala
Question
I am relatively new to Spark and Scala. I have a dataframe which has the following format:
| Col1 | Col2 | Col3 | Col_4 | Col_5 | Col_TS                  | Col_7 |
|------|------|------|-------|-------|-------------------------|-------|
| 1234 | AAAA | 1111 | afsdf | ewqre | 1970-01-01 00:00:00.0   | false |
| 1234 | AAAA | 1111 | ewqrw | dafda | 2017-01-17 07:09:32.748 | true  |
| 1234 | AAAA | 1111 | dafsd | afwew | 2015-01-17 07:09:32.748 | false |
| 5678 | BBBB | 2222 | afsdf | qwerq | 1970-01-01 00:00:00.0   | true  |
| 5678 | BBBB | 2222 | bafva | qweqe | 2016-12-08 07:58:43.04  | false |
| 9101 | CCCC | 3333 | caxad | fsdaa | 1970-01-01 00:00:00.0   | false |
What I need to do is to get the row that corresponds to the latest timestamp. In the example above, the keys are Col1, Col2 and Col3. Col_TS represents the timestamp and Col_7 is a boolean that determines the validity of the record. What I want to do is to find a way to group these records based on the keys and retain the one that has the latest timestamp.
So the output of the operation in the dataframe above should be:
| Col1 | Col2 | Col3 | Col_4 | Col_5 | Col_TS                  | Col_7 |
|------|------|------|-------|-------|-------------------------|-------|
| 1234 | AAAA | 1111 | ewqrw | dafda | 2017-01-17 07:09:32.748 | true  |
| 5678 | BBBB | 2222 | bafva | qweqe | 2016-12-08 07:58:43.04  | false |
| 9101 | CCCC | 3333 | caxad | fsdaa | 1970-01-01 00:00:00.0   | false |
I came up with a partial solution, but it only returns the key columns the records are grouped on (plus the aggregated timestamp), not the other columns:
```scala
import org.apache.spark.sql.functions.max
val grouped = df.groupBy("Col1", "Col2", "Col3").agg(max("Col_TS"))
```
| Col1 | Col2 | Col3 | max(Col_TS)             |
|------|------|------|-------------------------|
| 1234 | AAAA | 1111 | 2017-01-17 07:09:32.748 |
| 5678 | BBBB | 2222 | 2016-12-08 07:58:43.04  |
| 9101 | CCCC | 3333 | 1970-01-01 00:00:00.0   |
Can someone help me in coming up with a Scala code for performing this operation?
Answer
You can use a window function as follows:
```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Partition by the key columns and sort each partition by timestamp descending,
// so first("Col_TS") over this window is the latest timestamp for that key.
val windowSpec = Window.partitionBy("Col1", "Col2", "Col3").orderBy(col("Col_TS").desc)

df.withColumn("maxTS", first("Col_TS").over(windowSpec))
  .where(col("maxTS") === col("Col_TS")) // keep only the rows holding the latest timestamp
  .drop("maxTS")
  .show(false)
```
and you should get the following output:
```
+----+----+----+-----+-----+-----------------------+-----+
|Col1|Col2|Col3|Col_4|Col_5|Col_TS                 |Col_7|
+----+----+----+-----+-----+-----------------------+-----+
|5678|BBBB|2222|bafva|qweqe|2016-12-08 07:58:43.04 |false|
|1234|AAAA|1111|ewqrw|dafda|2017-01-17 07:09:32.748|true |
|9101|CCCC|3333|caxad|fsdaa|1970-01-01 00:00:00.0  |false|
+----+----+----+-----+-----+-----------------------+-----+
```
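Note that the `maxTS` filter above keeps every row that shares the latest timestamp within a key. If ties should collapse to a single row per key, a minimal alternative sketch (assuming the same `df` and key columns as above) uses `row_number` over the same window and keeps only the top-ranked row:

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

// Rank rows within each key, newest timestamp first; rank 1 is the latest row.
val windowSpec = Window.partitionBy("Col1", "Col2", "Col3").orderBy(col("Col_TS").desc)

df.withColumn("rn", row_number().over(windowSpec))
  .where(col("rn") === 1)
  .drop("rn")
  .show(false)
```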