In Apache Spark SQL, how to remove duplicate rows when using collect_list in a window function?


Problem description

I have the dataframe below:

+----+-----+----+--------+
|year|month|item|quantity|
+----+-----+----+--------+
|2019|1    |TV  |8       |
|2019|2    |AC  |10      |
|2018|1    |TV  |2       |
|2018|2    |AC  |3       |
+----+-----+----+--------+

By using a window function, I wanted to get the output below:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val partitionWindow = Window.partitionBy("year").orderBy("month")
val itemsList = collect_list(struct("item", "quantity")).over(partitionWindow)

df.select(col("year"), itemsList.as("items"))

Expected output:
+----+-------------------+
|year|items              |
+----+-------------------+
|2019|[[TV, 8], [AC, 10]]|
|2018|[[TV, 2], [AC, 3]] |
+----+-------------------+

But when I use the window function, there are duplicate rows for each item:

Current output:
+----+-------------------+
|year|items              |
+----+-------------------+
|2019|[[TV, 8]]          |
|2019|[[TV, 8], [AC, 10]]|
|2018|[[TV, 2]]          |
|2018|[[TV, 2], [AC, 3]] |
+----+-------------------+

I wanted to know: what is the best way to remove the duplicate rows?

Recommended answer

I believe the interesting part here is that the aggregated list of items has to be sorted by month. So I've written the code using three approaches.

Creating the sample dataset:

import org.apache.spark.sql._
import org.apache.spark.sql.functions._

case class data(year: Int, month: Int, item: String, quantity: Int)

val spark = SparkSession.builder().master("local").getOrCreate()
import spark.implicits._

val inputDF = spark.createDataset(Seq(
  data(2018, 2, "AC", 3),
  data(2019, 2, "AC", 10),
  data(2019, 1, "TV", 8),
  data(2018, 1, "TV", 2)
)).toDF()

Approach 1: Aggregate month, item and quantity into a list, then sort the items by month using a UDF:

case class items(item: String, quantity: Int)

// sorts the collected (month, item, quantity) rows by month and drops the month field
def getItemsSortedByMonth(itemsRows: Seq[Row]): Seq[items] = {
  if (itemsRows == null || itemsRows.isEmpty) {
    null
  } else {
    itemsRows.sortBy(r => r.getAs[Int]("month"))
      .map(r => items(r.getAs[String]("item"), r.getAs[Int]("quantity")))
  }
}

val itemsSortedByMonthUDF = udf(getItemsSortedByMonth(_: Seq[Row]))

val outputDF = inputDF.groupBy(col("year"))
  .agg(collect_list(struct("month", "item", "quantity")).as("items"))
  .withColumn("items", itemsSortedByMonthUDF(col("items")))

Approach 2: Using window functions:

import org.apache.spark.sql.expressions.Window

val monthWindowSpec = Window.partitionBy("year").orderBy("month")
val rowNumberWindowSpec = Window.partitionBy("year").orderBy("row_number")
val runningList = collect_list(struct("item", "quantity")).over(rowNumberWindowSpec)

val tempDF = inputDF
  // using row_number for continuous ranks if there are multiple items in the same month
  .withColumn("row_number", row_number().over(monthWindowSpec))
  .withColumn("items", runningList)
  .drop("month", "item", "quantity")

tempDF.persist()

// the row with the maximum row_number per year carries the complete list
val yearToSelect = tempDF.groupBy("year").agg(max("row_number").as("row_number"))

val outputDF = tempDF.join(yearToSelect, Seq("year", "row_number")).drop("row_number")
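
A more compact variant of Approach 2 (not from the original answer, just a sketch; fullWindow and dedupedDF are illustrative names): extend the window frame to the whole partition so that every row already carries the complete, month-ordered list, then deduplicate by year instead of joining on the max row_number.

// The default frame of an ordered window is "unbounded preceding to current row",
// which is what produces the running (duplicated) lists in the question.
// A frame spanning the whole partition makes collect_list see every month at once.
val fullWindow = Window.partitionBy("year").orderBy("month")
  .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

val dedupedDF = inputDF
  .withColumn("items", collect_list(struct("item", "quantity")).over(fullWindow))
  .select("year", "items")
  .dropDuplicates("year")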

Added a third approach for posterity, using the Dataset API's groupByKey and mapGroups:

// encoding to the data case class can be avoided if inputDF is not converted to a Dataset of Row objects
val outputDF = inputDF.as[data].groupByKey(_.year).mapGroups { case (year, rows) =>
  val itemsSortedByMonth = rows.toSeq.sortBy(_.month).map(s => items(s.item, s.quantity))
  (year, itemsSortedByMonth)
}.toDF("year", "items")
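
Roughly speaking, Approach 1 and Approach 3 both pull each year's rows into memory before sorting (collect_list in one case, rows.toSeq inside mapGroups in the other), which is fine for small groups like these but worth keeping in mind for skewed data; Approach 2 stays in built-in window functions but needs the extra join and persist to pick the final row per year.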

