How to read a Nested JSON in Spark Scala?


Problem Description

Here is my Nested JSON file.

{
  "dc_id": "dc-101",
  "source": {
    "sensor-igauge": {
      "id": 10,
      "ip": "68.28.91.22",
      "description": "Sensor attached to the container ceilings",
      "temp": 35,
      "c02_level": 1475,
      "geo": { "lat": 38.00, "long": 97.00 }
    },
    "sensor-ipad": {
      "id": 13,
      "ip": "67.185.72.1",
      "description": "Sensor ipad attached to carbon cylinders",
      "temp": 34,
      "c02_level": 1370,
      "geo": { "lat": 47.41, "long": -122.00 }
    },
    "sensor-inest": {
      "id": 8,
      "ip": "208.109.163.218",
      "description": "Sensor attached to the factory ceilings",
      "temp": 40,
      "c02_level": 1346,
      "geo": { "lat": 33.61, "long": -111.89 }
    },
    "sensor-istick": {
      "id": 5,
      "ip": "204.116.105.67",
      "description": "Sensor embedded in exhaust pipes in the ceilings",
      "temp": 40,
      "c02_level": 1574,
      "geo": { "lat": 35.93, "long": -85.46 }
    }
  }
}

How can I read the JSON file into a DataFrame with Spark Scala? There is no array object in the JSON file, so I can't use explode. Can anyone help?

Answer

import org.apache.spark.sql.functions.{array, col, explode}

// "multiline" is required because each JSON record spans several lines
val df = spark.read.option("multiline", true).json("data/test.json")

df
  // pack the sibling sensor structs under "source" into an array,
  // then explode so each sensor becomes its own row
  .select(col("dc_id"), explode(array("source.*")) as "level1")
  .withColumn("id", col("level1.id"))
  .withColumn("ip", col("level1.ip"))
  .withColumn("temp", col("level1.temp"))
  .withColumn("description", col("level1.description"))
  .withColumn("c02_level", col("level1.c02_level"))
  .withColumn("lat", col("level1.geo.lat"))
  .withColumn("long", col("level1.geo.long"))
  .drop("level1")
  .show(false)

Sample output:

+------+---+---------------+----+------------------------------------------------+---------+-----+-------+
|dc_id |id |ip             |temp|description                                     |c02_level|lat  |long   |
+------+---+---------------+----+------------------------------------------------+---------+-----+-------+
|dc-101|10 |68.28.91.22    |35  |Sensor attached to the container ceilings       |1475     |38.0 |97.0   |
|dc-101|8  |208.109.163.218|40  |Sensor attached to the factory ceilings         |1346     |33.61|-111.89|
|dc-101|13 |67.185.72.1    |34  |Sensor ipad attached to carbon cylinders        |1370     |47.41|-122.0 |
|dc-101|5  |204.116.105.67 |40  |Sensor embedded in exhaust pipes in the ceilings|1574     |35.93|-85.46 |
+------+---+---------------+----+------------------------------------------------+---------+-----+-------+
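
The key step is explode(array("source.*")). The four sensors are sibling struct fields rather than elements of an array, so array("source.*") first packs them into a single array column, and explode then turns that array into one row per sensor. You can inspect the intermediate shape like this (the schema shown is what Spark's JSON inference typically produces; it sorts struct fields alphabetically):

val exploded = df.select(col("dc_id"), explode(array("source.*")) as "level1")
exploded.printSchema()
// root
//  |-- dc_id: string (nullable = true)
//  |-- level1: struct (nullable = true)
//  |    |-- c02_level: long (nullable = true)
//  |    |-- description: string (nullable = true)
//  |    |-- geo: struct (nullable = true)
//  |    |    |-- lat: double (nullable = true)
//  |    |    |-- long: double (nullable = true)
//  |    |-- id: long (nullable = true)
//  |    |-- ip: string (nullable = true)
//  |    |-- temp: long (nullable = true)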

Instead of selecting each column by hand, you can write a generic helper to pull out all the individual columns, as sketched below.
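
One way to make that generic is a schema-driven helper rather than a UDF proper. The flatten function below is my own sketch (the name and the underscore-joined aliases are illustrative, not from the original answer); it walks the DataFrame's schema recursively and builds a select-list of every leaf field:

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{array, col, explode}
import org.apache.spark.sql.types.StructType

// Recursively collect every leaf field in the schema, aliasing nested
// names with underscores (e.g. "level1.geo.lat" -> "level1_geo_lat").
def flatten(schema: StructType, prefix: String = ""): Array[Column] =
  schema.fields.flatMap { field =>
    val name = if (prefix.isEmpty) field.name else s"$prefix.${field.name}"
    field.dataType match {
      case st: StructType => flatten(st, name)
      case _              => Array(col(name).as(name.replace(".", "_")))
    }
  }

val exploded = df.select(col("dc_id"), explode(array("source.*")) as "level1")
exploded.select(flatten(exploded.schema): _*).show(false)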

Note: Tested with Spark 2.3
