How to use countDistinct in Scala with Spark?


Question

I've tried to use the countDistinct function, which should be available in Spark 1.5 according to the Databricks blog. However, I got the following exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: undefined function countDistinct;

I found that on the Spark developers' mailing list they suggest using the count and distinct functions to get the same result that countDistinct should produce:

count(distinct <columnName>)
// instead of
countDistinct(<columnName>)
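
For example, the string-based form already parses through expr in Spark 1.5, while the function name does not (a minimal sketch; the DataFrame df with a string column B is hypothetical):

import org.apache.spark.sql.functions.expr

df.agg(expr("count(distinct B)"))  // parses and runs in Spark 1.5
df.agg(expr("countDistinct(B)"))   // fails with the AnalysisException above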

Because I build aggregation expressions dynamically from a list of aggregation function names, I'd prefer not to have any special cases that require different treatment.

So, is it possible to unify this by:

  • registering a new UDAF which will be an alias for count(distinct columnName)
  • manually registering the CountDistinct function already implemented in Spark, probably the one from the following import:

import org.apache.spark.sql.catalyst.expressions.{CountDistinctFunction, CountDistinct}

Or should it be done in some other way?

Example (with some local references and unnecessary code removed):

import org.apache.spark.SparkContext
import org.apache.spark.sql.{Column, SQLContext, DataFrame}
import org.apache.spark.sql.functions._

import scala.collection.mutable.ListBuffer


class Flattener(sc: SparkContext) {
  val sqlContext = new SQLContext(sc)

  // Builds one flattened row per group: a group-size column plus one
  // aggregated column per (field, aggregation function) pair.
  def flatTable(data: DataFrame, groupField: String): DataFrame = {
    // TypeRecognizer and DType are local helpers omitted from this example.
    val flatteningExpressions = data.columns.zip(TypeRecognizer.getTypes(data)).
      flatMap(x => getFlatteningExpressions(x._1, x._2)).toList

    data.groupBy(groupField).agg(
      expr(s"count($groupField) as groupSize"),
      flatteningExpressions: _*
    )
  }

  private def getFlatteningExpressions(fieldName: String, fieldType: DType): List[Column] = {
    val aggFuncs = getAggregationFunctions(fieldType)

    aggFuncs.map(f => expr(s"$f($fieldName) as ${fieldName}_$f"))
  }

  private def getAggregationFunctions(fieldType: DType): List[String] = {
    val aggFuncs = new ListBuffer[String]()

    if (fieldType == DType.NUMERIC) {
      aggFuncs += ("avg", "min", "max")
    }

    if (fieldType == DType.CATEGORY) {
      aggFuncs += "countDistinct"  // this is the case that fails with the exception above
    }

    aggFuncs.toList
  }

}

Answer

countDistinct can be used in two different forms:

df.groupBy("A").agg(expr("count(distinct B)")

df.groupBy("A").agg(countDistinct("B"))

However, neither of these methods works when you want to use them on the same column as your custom UDAF (implemented as a UserDefinedAggregateFunction in Spark 1.5):

// Assume that we have already implemented and registered a StdDev UDAF
df.groupBy("A").agg(countDistinct("B"), expr("StdDev(B)"))

// Will cause
Exception in thread "main" org.apache.spark.sql.AnalysisException: StdDev is implemented based on the new Aggregate Function interface and it cannot be used with functions implemented based on the old Aggregate Function interface.;

Due to these limitations, it looks like the most reasonable approach is to implement countDistinct as a UDAF, which should allow all functions to be treated in the same way and countDistinct to be used along with other UDAFs.

A sample implementation can look like this:

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class CountDistinct extends UserDefinedAggregateFunction {
  // Single string input column.
  override def inputSchema: StructType = StructType(StructField("value", StringType) :: Nil)

  // The buffer holds the set of distinct values seen so far, stored as a sequence.
  override def bufferSchema: StructType = StructType(
    StructField("items", ArrayType(StringType, true)) :: Nil
  )

  override def dataType: DataType = IntegerType

  override def deterministic: Boolean = true

  override def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = Seq[String]()
  }

  // Add the incoming value to the set of already-seen values.
  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    buffer(0) = (buffer.getSeq[String](0).toSet + input.getString(0)).toSeq
  }

  // Merge two partial sets of distinct values.
  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0) = (buffer1.getSeq[String](0).toSet ++ buffer2.getSeq[String](0).toSet).toSeq
  }

  // The result is the number of distinct values collected.
  override def evaluate(buffer: Row): Any = {
    buffer.getSeq[String](0).length
  }
}
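
The UDAF can then be registered under the name used in the dynamically built expression strings and used next to other UDAFs (a sketch; it assumes a SQLContext sqlContext and a DataFrame df are in scope):

// Register the UDAF so it is resolvable by name in expression strings.
sqlContext.udf.register("countDistinct", new CountDistinct)

// The dynamically built expressions now work uniformly, also alongside other UDAFs:
df.groupBy("A").agg(expr("countDistinct(B)"), expr("StdDev(B)"))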

