PySpark sqlContext JSON query all values of an array


Question

I currently have a JSON file that I am trying to query with sqlContext.sql(). It looks something like this:

{
  "sample": {
    "persons": [
      {
        "id": "123"
      },
      {
        "id": "456"
      }
    ]
  }
}
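
For context, here is a minimal sketch of how such a file might be loaded and registered as the test table queried below. The file name is hypothetical, and this assumes a Spark 1.x-style SQLContext; note that Spark versions of that era expect each JSON record on a single line:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlContext = SQLContext(sc)

# Read the JSON file; Spark infers the nested schema automatically.
# Hypothetical path; older Spark versions expect one JSON record per line.
df = sqlContext.read.json("sample.json")

# Register the DataFrame so it can be queried with sqlContext.sql()
df.registerTempTable("test")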

If I just want the first value, I would type:

sqlContext.sql("SELECT sample.persons[0] FROM test")

but I want all the values of "persons" without having to write a loop. Loops consume too much processing power, and given the size of these files, that would be impractical.

I thought I would be able to put a range inside the [] brackets, but I can't find any syntax for doing that.

Answer

If your schema looks like this:

root
 |-- sample: struct (nullable = true)
 |    |-- persons: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- id: string (nullable = true)

and you want to access individual structs from the persons array, all you have to do is explode it:

from pyspark.sql.functions import explode

# explode() produces one row per element of the persons array;
# each element is a struct, so its id field can then be selected
df.select(explode("sample.persons").alias("person")).select("person.id")
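
The same flattening can also be written directly in SQL with LATERAL VIEW explode; this is a sketch assuming the DataFrame is registered as the test table:

sqlContext.sql("""
    SELECT person.id
    FROM test
    LATERAL VIEW explode(sample.persons) exploded AS person
""")

On the sample data above, either version returns one row per array element:

+---+
| id|
+---+
|123|
|456|
+---+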

See also: Querying Spark SQL DataFrame with complex types
