Spark SQL: using collect_set over array values?

Problem description

I have an aggregated DataFrame with a column created using collect_set. I now need to aggregate over this DataFrame again, and apply collect_set to the values of that column again. The problem is that I need to apply collect_set over the values of the sets - and so far the only way I see how to do so is by exploding the aggregated DataFrame. Is there a better way?

Example:

Initial DataFrame:

country   | continent   | attributes
-------------------------------------
Canada    | America     | A
Belgium   | Europe      | Z
USA       | America     | A
Canada    | America     | B
France    | Europe      | Y
France    | Europe      | X

Aggregated DataFrame (the one I receive as input) - aggregation over country:

country   | continent   | attributes
-------------------------------------
Canada    | America     | A, B
Belgium   | Europe      | Z
USA       | America     | A
France    | Europe      | Y, X

My desired output - aggregation over continent:

continent   | attributes
-------------------------------------
America     | A, B
Europe      | X, Y, Z
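
For reference, the explode-based approach I mentioned would look roughly like this (just a sketch - it assumes the aggregated DataFrame is bound to a variable named aggregated and that spark.implicits._ is in scope):

// Explode the array column back to one row per attribute, then collect again.
// `aggregated` is an assumed name for the aggregated input DataFrame shown above.
import org.apache.spark.sql.functions.{collect_set, explode}

aggregated
  .withColumn("attribute", explode($"attributes"))
  .groupBy("continent")
  .agg(collect_set($"attribute") as "attributes")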

Recommended answer

Since you can have only a handful of rows at this point, you can just collect the attributes as-is and flatten the result (Spark >= 2.4):

import org.apache.spark.sql.functions.{collect_set, flatten, array_distinct}
import spark.implicits._  // for toDF and $ (assumes a SparkSession named `spark`, as in spark-shell)

val byState = Seq(
  ("Canada", "America", Seq("A", "B")),
  ("Belgium", "Europe", Seq("Z")),
  ("USA", "America", Seq("A")),
  ("France", "Europe", Seq("Y", "X"))
).toDF("country", "continent", "attributes")

byState
  .groupBy("continent")
  // collect_set gathers the arrays, flatten merges them, array_distinct removes duplicates
  .agg(array_distinct(flatten(collect_set($"attributes"))) as "attributes")
  .show

+---------+----------+
|continent|attributes|
+---------+----------+
|   Europe| [Y, X, Z]|
|  America|    [A, B]|
+---------+----------+

In the general case things are much harder to handle, and in many cases, if you expect large lists with many duplicates and many values per group, the optimal solution* is to just recompute the result from scratch, i.e.

// `input` is the original, non-aggregated DataFrame
input.groupBy($"continent").agg(collect_set($"attributes") as "attributes")

One possible alternative is to use an Aggregator:

import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.{Encoder, Encoders}
import scala.collection.mutable.{Set => MSet}

// Merges the Seq[U] values extracted by `f` from each record into one deduplicated Seq[U]
class MergeSets[T, U](f: T => Seq[U])(implicit enc: Encoder[Seq[U]]) extends
    Aggregator[T, MSet[U], Seq[U]] with Serializable {

  // Empty buffer
  def zero = MSet.empty[U]

  // Add all values of a single record to the buffer
  def reduce(acc: MSet[U], x: T) = {
    for { v <- f(x) } acc.add(v)
    acc
  }

  // Combine two partial buffers
  def merge(acc1: MSet[U], acc2: MSet[U]) = {
    acc1 ++= acc2
  }

  // Return the final, deduplicated sequence
  def finish(acc: MSet[U]) = acc.toSeq

  // mutable.Set has no built-in encoder, so use Kryo for the buffer
  def bufferEncoder: Encoder[MSet[U]] = Encoders.kryo[MSet[U]]
  def outputEncoder: Encoder[Seq[U]] = enc
}

and apply it as follows:

case class CountryAggregate(
  country: String, continent: String, attributes: Seq[String])

byState
  .as[CountryAggregate]
  .groupByKey(_.continent)
  .agg(new MergeSets[CountryAggregate, String](_.attributes).toColumn)
  .toDF("continent", "attributes")
  .show

+---------+----------+
|continent|attributes|
+---------+----------+
|   Europe| [X, Y, Z]|
|  America|    [B, A]|
+---------+----------+

but that's clearly not a Java-friendly option.

See also How to aggregate values into collection after groupBy? (similar, but without uniqueness constraint).

* That's because explode can be quite expensive, especially in older Spark versions, as is access to the external representation of SQL collections.
