Filter DataFrame based on words in array in Apache Spark


Problem description

I am trying to filter a Dataset by keeping only the rows that contain one of the words in an array. I am using the contains method; it works for a single string but not for an array. Below is the code:

val dataSet = spark.read.option("header","true").option("inferschema","true").json(path).na.drop.cache()

val threats_path = spark.read.textFile("src/main/resources/cyber_threats").collect()

val newData = dataSet.select("*").filter(col("_source.raw_text").contains(threats_path)).show()

It is not working because threats_path is an array of strings and contains works on a single string. Any help would be appreciated.

Recommended answer

You can use the isin method on the column (it is a built-in Column method, not a UDF).

It will go something like this:

val threats_path = spark.read.textFile("src/main/resources/cyber_threats").collect()

val dataSet = ???

dataSet.where(col("_source.raw_text").isin(threats_path: _*))
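A minimal, self-contained version of that approach might look like the following sketch; the input paths, the events.json file name, and the local master are illustrative placeholders, not from the original question. Note that isin tests exact equality against the list, not substring containment.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object IsinFilterExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("isin-filter")
      .master("local[*]")  // placeholder; configure for your cluster
      .getOrCreate()

    // Hypothetical input path standing in for the question's `path`.
    val dataSet = spark.read.json("src/main/resources/events.json").na.drop

    // collect() pulls every threat word onto the driver as Array[String].
    val threats_path = spark.read
      .textFile("src/main/resources/cyber_threats")
      .collect()

    // isin expands the array into an IN (...) predicate: it keeps rows
    // whose raw_text equals one of the words exactly.
    dataSet
      .where(col("_source.raw_text").isin(threats_path: _*))
      .show()

    spark.stop()
  }
}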

Note that if threats_path is large, this will have a performance impact, both because of the collect and because of the filter using isin.

I suggest you filter dataSet against threats_path using a join instead. It will go something like this:

val dataSet = spark.read.option("header","true").option("inferschema","true").json(path).na.drop

val threats_path = spark.read.textFile("src/main/resources/cyber_threats")

val newData = threats_path.join(dataSet, col("_source.raw_text") === col("<col in threats_path >"), "leftouter").show()
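To make that concrete: spark.read.textFile returns a Dataset[String] whose single column is named value, so that is the column to join on. The sketch below also swaps the left outer join above for a left_semi join, which acts as a pure filter, keeping only the dataSet rows that match a threat word; the file names are placeholders as before.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object JoinFilterExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("join-filter")
      .master("local[*]")  // placeholder; configure for your cluster
      .getOrCreate()

    val dataSet = spark.read.json("src/main/resources/events.json").na.drop

    // No collect(): the threat words stay distributed as a Dataset[String];
    // its one column is named "value".
    val threats_path = spark.read.textFile("src/main/resources/cyber_threats")

    // left_semi keeps dataSet rows with at least one matching threat word
    // and returns only dataSet's columns in the output.
    val newData = dataSet.join(
      threats_path,
      col("_source.raw_text") === threats_path("value"),
      "left_semi"
    )

    newData.show()
    spark.stop()
  }
}

If the threats file is small, Spark will usually broadcast it automatically (subject to spark.sql.autoBroadcastJoinThreshold), so the join can perform comparably to the isin version without collecting anything to the driver.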

Hope this helps.

