Spark Dataframe Nested Case When Statement


Question

I need to implement the below SQL logic in a Spark DataFrame:

SELECT KEY,
    CASE WHEN tc in ('a','b') THEN 'Y'
         WHEN tc in ('a') AND amt > 0 THEN 'N'
         ELSE NULL END REASON
FROM dataset1;

My input DataFrame is as follows:

val dataset1 = Seq((66, "a", "4"), (67, "a", "0"), (70, "b", "4"), (71, "d", "4")).toDF("KEY", "tc", "amt")

dataset1.show()

+---+---+---+
|KEY| tc|amt|
+---+---+---+
| 66|  a|  4|
| 67|  a|  0|
| 70|  b|  4|
| 71|  d|  4|
+---+---+---+

I have implemented the nested case when statement as:

dataset1.withColumn("REASON", when(col("tc").isin("a", "b"), "Y")
  .otherwise(when(col("tc").equalTo("a") && col("amt").geq(0), "N")
    .otherwise(null))).show()

+---+---+---+------+
|KEY| tc|amt|REASON|
+---+---+---+------+
| 66|  a|  4|     Y|
| 67|  a|  0|     Y|
| 70|  b|  4|     Y|
| 71|  d|  4|  null|
+---+---+---+------+

Readability of the above logic with the otherwise statement gets messy if the nested when statements go further.

Is there any better way of implementing nested case when statements in Spark DataFrames?

Answer

There is no nesting here, therefore there is no need for otherwise. All you need is chained when:

import org.apache.spark.sql.functions.{lit, when}
import spark.implicits._

when($"tc" isin ("a", "b"), "Y")
  .when($"tc" === "a" && $"amt" >= 0, "N")

ELSE NULL is implicit, so you can omit it completely.

The pattern you use is more applicable for folding over a data structure:

val cases = Seq(
  ($"tc" isin ("a", "b"), "Y"),
  ($"tc" === "a" && $"amt" >= 0, "N")
)

where when - otherwise naturally follows the recursion pattern and null provides the base case:

cases.foldLeft(lit(null)) {
  case (acc, (expr, value)) => when(expr, value).otherwise(acc)
}
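The mechanics of this fold can be sketched in plain Scala, with no Spark dependency (the `Row` type and predicates below are illustrative stand-ins for Column expressions). Each fold step wraps the previous accumulator as the "otherwise" branch, so cases appearing later in the Seq end up outermost and are checked first:

```scala
// Plain-Scala sketch of the fold: a hypothetical (tc, amt) row and a
// list of (predicate, result) cases standing in for Column expressions.
type Row = (String, Int)

val cases: Seq[(Row => Boolean, String)] = Seq(
  ({ case (tc, _)   => Set("a", "b")(tc) },     "Y"),
  ({ case (tc, amt) => tc == "a" && amt >= 0 }, "N")
)

// The base case plays the role of lit(null); each step wraps the
// accumulator as the "otherwise" branch.
val base: Row => String = _ => null
val reason: Row => String = cases.foldLeft(base) {
  case (acc, (pred, value)) => (row: Row) => if (pred(row)) value else acc(row)
}

println(reason(("a", 4)))  // N -- the later ("N") case is outermost, so it wins
println(reason(("b", 4)))  // Y
println(reason(("d", 4)))  // null
```

Note that this reverses the priority relative to the chained when above: with the fold, the last entry of `cases` is evaluated first.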

Please note that it is impossible to reach the "N" outcome with this chain of conditions. If tc is equal to "a", it will be captured by the first clause. If it is not, it will fail to satisfy both predicates and default to NULL. You should rather:

when($"tc" === "a" && $"amt" >= 0, "N")
 .when($"tc" isin ("a", "b"), "Y")
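The effect of the corrected ordering can be checked with a plain-Scala analogue of first-match-wins chained conditions (illustrative only; no Spark involved):

```scala
// Plain-Scala analogue of the reordered chain: first match wins,
// so the "N" branch is now reachable for tc = "a".
def reason(tc: String, amt: Int): String =
  if (tc == "a" && amt >= 0) "N"   // checked first after reordering
  else if (Set("a", "b")(tc)) "Y"
  else null

println(reason("a", 4))  // N
println(reason("a", 0))  // N
println(reason("b", 4))  // Y
println(reason("d", 4))  // null
```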
