How to parse a JSON column in a DataFrame in Scala


Problem description

I have a data frame with a column containing JSON strings. Example below. There are 3 columns - a, b, c. Column c is StringType.

| a         | b    |           c                       |
--------------------------------------------------------
|77         |ABC   |    {"12549":38,"333513":39}       |
|78         |ABC   |    {"12540":38,"333513":39}       |

I want to pivot the JSON keys into columns of the data frame. The example below -

| a         | b    | 12549  | 333513 | 12540
---------------------------------------------
|77         |ABC   |38      |39      | null
|78         |ABC   | null   |39      | 38

Recommended answer

This may not be the most efficient, as it has to read all of the JSON records an extra time to infer the schema. If you can statically define the schema, it should do better.

import org.apache.spark.sql.functions.from_json
import spark.implicits._

// Sample data: column c holds the JSON strings
val data = spark.createDataset(Seq(
  (77, "ABC", "{\"12549\":38,\"333513\":39}"),
  (78, "ABC", "{\"12540\":38,\"333513\":39}")
)).toDF("a", "b", "c")

// Infer the schema by reading column c as a Dataset[String] (this is the extra pass over the data)
val schema = spark.read.json(data.select("c").as[String]).schema

// Parse the JSON into a struct, then flatten it into top-level columns
data.select($"a", $"b", from_json($"c", schema).as("s")).select("a", "b", "s.*").show(false)

Result:

+---+---+-----+-----+------+
|a  |b  |12540|12549|333513|
+---+---+-----+-----+------+
|77 |ABC|null |38   |39    |
|78 |ABC|38   |null |39    |
+---+---+-----+-----+------+
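As noted above, defining the schema statically avoids the extra pass over the data that inference requires. A minimal sketch of that variant, assuming the JSON keys (12540, 12549, 333513) are known ahead of time and that LongType matches the values:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StructType, StructField, LongType}
import org.apache.spark.sql.functions.from_json

val spark = SparkSession.builder().master("local[1]").appName("json-parse").getOrCreate()
import spark.implicits._

val data = Seq(
  (77, "ABC", """{"12549":38,"333513":39}"""),
  (78, "ABC", """{"12540":38,"333513":39}""")
).toDF("a", "b", "c")

// Static schema: one field per known JSON key, so no inference read is needed
val staticSchema = StructType(Seq(
  StructField("12540", LongType),
  StructField("12549", LongType),
  StructField("333513", LongType)
))

val parsed = data
  .select($"a", $"b", from_json($"c", staticSchema).as("s"))
  .select("a", "b", "s.*")

parsed.show(false)
```

Keys absent from a given row simply come back as null, matching the inferred-schema output above.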
