Filter Spark DataFrame based on another DataFrame that specifies denylist criteria
Question
I have a largeDataFrame (multiple columns and billions of rows) and a smallDataFrame (a single column and 10,000 rows).
I'd like to filter out of the largeDataFrame every row whose some_identifier column matches one of the rows in the smallDataFrame.
Here's an example:
largeDataFrame
some_identifier,first_name
111,bob
123,phil
222,mary
456,sue
smallDataFrame
some_identifier
123
456
desiredOutput
111,bob
222,mary
Here is my ugly solution.
val smallDataFrame2 = smallDataFrame.withColumn("is_bad", lit("bad_row"))
val desiredOutput = largeDataFrame
  .join(broadcast(smallDataFrame2), Seq("some_identifier"), "left")
  .filter($"is_bad".isNull)
  .drop("is_bad")
Is there a cleaner solution?
Answer
You'll need to use a left_anti join in this case.

A left anti join is the opposite of a left semi join: it keeps only the rows of the left table that have no match in the right table on the given key:
largeDataFrame
  .join(smallDataFrame, Seq("some_identifier"), "left_anti")
.show
// +---------------+----------+
// |some_identifier|first_name|
// +---------------+----------+
// | 222| mary|
// | 111| bob|
// +---------------+----------+
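As a side note, the broadcast hint from the question still applies here: `largeDataFrame.join(broadcast(smallDataFrame), Seq("some_identifier"), "left_anti")` tells Spark to broadcast the small table. The semantics of a left anti join can be sketched with plain Scala collections, no Spark required; the data below just mirrors the example, and the names are only for illustration:

```scala
// Left-anti-join semantics on plain collections:
// keep each left row whose key has no match among the right-side keys.
val large = Seq((111, "bob"), (123, "phil"), (222, "mary"), (456, "sue"))
val denylist = Set(123, 456) // the smallDataFrame's single column

val kept = large.filterNot { case (id, _) => denylist.contains(id) }
println(kept) // List((111,bob), (222,mary))
```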