PySpark DataFrames: filter where some value is in array column
Question
I have a DataFrame in PySpark that has a nested array value for one of its fields. I would like to filter the DataFrame where the array contains a certain string. I'm not seeing how I can do that.
The schema looks like this:

```
root
 |-- name: string (nullable = true)
 |-- lastName: array (nullable = true)
 |    |-- element: string (containsNull = false)
```
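A minimal way to reproduce a DataFrame with this schema, using hypothetical sample data (the nullability flags may differ slightly from the printout above):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical rows: a name plus an array of last names
df = spark.createDataFrame(
    [("John", ["Smith", "Jones"]), ("Jane", ["Doe"])],
    "name string, lastName array<string>",
)
df.printSchema()
```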
I want to return all the rows where upper(name) == 'JOHN' and where the lastName column (the array) contains 'SMITH', and the equality there should be case-insensitive (like it is for the name). I found the isin() function on a column value, but that seems to work backwards from what I want. It seems like I need a contains() function on a column value. Does anyone have ideas for a straightforward way to do this?
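For reference, isin() tests whether the column's own scalar value appears in a supplied list, which is indeed the reverse direction of an array-membership test; a quick sketch of the distinction, using the hypothetical df built above:

```python
from pyspark.sql import functions as F

# isin(): is the column's scalar value one of these? (reverse of what's needed)
df.filter(F.col("name").isin("John", "Jane"))

# What the question wants instead: does the array column contain this value?
# (array_contains is case-sensitive, hence the answer's use of transform/upper)
df.filter(F.array_contains(F.col("lastName"), "Smith"))
```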
Answer
Update for 2019
Spark 2.4.0 introduced new functions such as array_contains and transform (see the official documentation), so this can now be done in SQL.
For your question, it would be:
```python
dataframe.filter('array_contains(transform(lastName, x -> upper(x)), "SMITH")')
```
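Combined with the name condition from the question, the full filter might look like this sketch (assuming the hypothetical df built earlier; F.expr lets the SQL expression sit inside the DataFrame API):

```python
from pyspark.sql import functions as F

result = df.filter(
    (F.upper(F.col("name")) == "JOHN")
    & F.expr('array_contains(transform(lastName, x -> upper(x)), "SMITH")')
)
result.show()
```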
It is better than the previous solution that used an RDD as a bridge, because DataFrame operations are much faster than RDD operations.
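For comparison, the RDD "bridge" pattern alluded to above usually meant filtering row by row in Python and converting back; a rough sketch of that idea (not the exact earlier answer):

```python
# Rough sketch of the older RDD bridge: every row is serialized out to
# Python, filtered there, and the survivors converted back to a DataFrame.
filtered = (
    df.rdd
    .filter(
        lambda row: row.name.upper() == "JOHN"
        and any(ln.upper() == "SMITH" for ln in (row.lastName or []))
    )
    .toDF(df.schema)
)
```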