How to query nested JSON with internal arrays in Spark based on an equality check


Question

I have a nested JSON structure loaded into a DataFrame in Spark. It contains several layers of arrays, and I'm trying to figure out how to query this structure by values in the internal arrays.

Example: consider the following structure (directors.json file):

[
  {
    "director": "Steven Spielberg",
    "films": [
      {
        "name": "E.T",
        "actors": ["Henry Thomas", "Drew Barrymore"]
      },
      {
        "name": "The Goonies",
        "actors": ["Sean Astin", "Josh Brolin"]
      }
    ]
  },
  {
    "director": "Quentin Tarantino",
    "films": [
      {
        "name": "Pulp Fiction",
        "actors": ["John Travolta", "Samuel L. Jackson"]
      },
      {
        "name": "Kill Bill: Vol. 1",
        "actors": ["Uma Thurman", "Daryl Hannah"]
      }
    ]
  }
]

Let's say I want to run a query that will return all the films that a specific actor has participated in, something like this:

val directors = spark.read.json("directors.json")
directors.select($"films.name").where($"films.actors" === "Henry Thomas")

When I run this in the Spark shell I get an exception:

org.apache.spark.sql.AnalysisException: cannot resolve '(`films`.`actors` = 'Henry Thomas')' due to data type mismatch: differing types in '(`films`.`actors` = 'Henry Thomas')' (array<array<string>> and string).;;
'Project [name#128]
+- 'Filter (films#92.actors = Henry Thomas)
   +- AnalysisBarrier
         +- Project [films#92.name AS name#128, films#92]
            +- Relation[director#91,films#92] json

How do I properly make such a query?

Are there different alternatives? If so, what are the pros and cons?

Thanks

Edit

@thebluephantom this still doesn't work, I'm getting a similar exception. I think it's because I have an array within another array. This is the exception:

org.apache.spark.sql.AnalysisException: cannot resolve 'array_contains(`films`.`actors`, 'Henry Thomas')' due to data type mismatch: Arguments must be an array followed by a value of same type as the array members;;
'Filter array_contains(films#7.actors, Henry Thomas)
+- AnalysisBarrier
      +- Project [director#6, films#7]
         +- Relation[director#6,films#7] json

Answer

Try something similar to this, whereby the film data is exploded, which means the repeating group of actors is simply normalized - otherwise I cannot get it to work either - maybe someone else can:

More complete, using Spark 2.3.1, as follows with your data:

import org.apache.spark.sql.functions._   // explode, array_contains
import spark.implicits._                   // the $"..." column syntax

val df = spark.read
   .option("multiLine", true).option("mode", "PERMISSIVE")
   .json("/FileStore/tables/films.txt")

// Explode the films array so each (director, film) pair becomes its own row,
// then filter on the actors array inside the exploded struct.
val flattened = df.select($"director", explode($"films").as("films_flat"))
flattened.select("*").where(array_contains(flattened("films_flat.actors"), "Henry Thomas")).show(false)

Returns:

 +----------------+-------------------------------------+
 |director        |films_flat                           |
 +----------------+-------------------------------------+
 |Steven Spielberg|[[Henry Thomas, Drew Barrymore], E.T]|
 +----------------+-------------------------------------+
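
If only the film titles are wanted, as in the original select, a minimal follow-up on the flattened DataFrame might look like this (the filmsForActor name is just illustrative; column paths assume the schema shown above):

// Keep only the rows whose inner actors array contains the actor,
// then project the film name out of the exploded struct.
val filmsForActor = flattened
  .where(array_contains($"films_flat.actors", "Henry Thomas"))
  .select($"films_flat.name".as("film"))

filmsForActor.show(false)
// For the sample data this should print a single row: E.T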

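As for alternatives: the same normalization can also be written in Spark SQL with LATERAL VIEW explode. This is just a sketch, and the temp view name and aliases are illustrative:

// Register the DataFrame as a temp view, explode the films array with
// LATERAL VIEW, and filter on the inner actors array.
df.createOrReplaceTempView("directors")

spark.sql("""
  SELECT director, film.name AS film
  FROM directors
  LATERAL VIEW explode(films) f AS film
  WHERE array_contains(film.actors, 'Henry Thomas')
""").show(false)

Both variants explode the films array, so the director value is repeated once per film while filtering; for data of this size that cost is negligible, and the SQL form may be easier to share with non-Scala users.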
