Filter NULL value in dataframe column of spark scala


Problem description

I have a dataframe (df) with the following:

+---------+------------+
|     col1|        col2|
+---------+------------+
|colvalue1|        NULL|
|colvalue2|col2value...|
+---------+------------+

I am trying to filter rows based on col2 as follows:

df.filter(($"col2".isNotNULL) || ($"col2" !== "NULL")  || ($"col2" !== "null")  || ($"col2".trim !== "NULL"))

But the row which has NULL is not filtered out. The column shows nullable = true.

Can anyone tell me what mistake I am making? I am using Spark 1.6.

Answer

Your !== notation is wrong; it should be =!=. You also can't do $"col2".trim. And since you have combined negated conditions with ||, at least one of the negations is always true: in your example, ($"col2".isNotNull) is always true, so every row is filtered in. Individual negations combined by || should therefore be used with care.
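A minimal plain-Scala sketch of the pitfall (no Spark needed; `Option[String]` stands in for the nullable column, `None` for SQL NULL, and the names `broken`/`fixed` are illustrative, not Spark API):

```scala
// Model of the broken predicate: each branch mirrors one Spark condition.
// For any non-null string -- including the literal "NULL" -- the first
// branch (isNotNull) is already true, so the whole || chain passes.
def broken(col2: Option[String]): Boolean =
  col2.isDefined ||                    // $"col2".isNotNull
  col2.exists(_ != "NULL") ||          // $"col2" =!= "NULL"
  col2.exists(_ != "null")             // $"col2" =!= "null"

// Model of the corrected filter: negate the disjunction of the
// *positive* tests instead of OR-ing individual negations.
def fixed(col2: Option[String]): Boolean =
  !(col2.isEmpty ||                    // $"col2".isNull
    col2.exists(_ == "NULL") ||       // $"col2" === "NULL"
    col2.exists(_ == "null"))         // $"col2" === "null"

val rows = Seq(Some("NULL"), Some("col2value..."), None)
println(rows.filter(broken))  // List(Some(NULL), Some(col2value...))
println(rows.filter(fixed))   // List(Some(col2value...))
```

Note how `broken` keeps the `"NULL"` string row, while `fixed` drops it.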

So the correct form is:

df.filter(!($"col2".isNull || ($"col2" === "NULL") || ($"col2" === "null")))

Or, even better, use the built-in functions isnull and trim:

df.filter(!(isnull($"col2") || (trim($"col2") === "NULL") || (trim($"col2") === "null")))
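As a quick plain-Scala check of what the trim-based version additionally buys (again `Option[String]` stands in for the nullable column, and `keep` is an illustrative name, not Spark API), it also drops padded variants such as " null ":

```scala
// trim-based filter: keep a row only when col2 is neither null nor the
// literal "NULL"/"null" once surrounding whitespace is stripped.
def keep(col2: Option[String]): Boolean =
  !(col2.isEmpty || col2.exists(v => Set("NULL", "null")(v.trim)))

val rows = Seq(Some("NULL"), Some(" null "), Some("col2value..."), None)
println(rows.filter(keep))  // List(Some(col2value...))
```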
