Convert pyspark dataframe into nested json structure


Question

I'm trying to convert the dataframe below into nested JSON (string).

Input:

+---+---+-------+------+
| id|age| name  |number|
+---+---+-------+------+
|  1| 12|  smith|  uber|
|  2| 13|    jon| lunch|
|  3| 15|jocelyn|rental|
|  3| 15|  megan|   sds|
+---+---+-------+------+

Expected output:

+---+---+------------------------------------------------------------------------------+
|id |age|values                                                                        |
+---+---+------------------------------------------------------------------------------+
|1  |12 |[{"number": "uber", "name": "smith"}]                                         |
|2  |13 |[{"number": "lunch", "name": "jon"}]                                          |
|3  |15 |[{"number": "rental", "name": "megan"}, {"number": "sds", "name": "jocelyn"}] |
+---+---+------------------------------------------------------------------------------+

My code:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StructType, StringType, IntegerType

spark = SparkSession.builder.getOrCreate()

# List of rows: (id, age, number, name)
data = [(1, 12, "smith", "uber"),
        (2, 13, "jon", "lunch"),
        (3, 15, "jocelyn", "rental"),
        (3, 15, "megan", "sds")]

# Create a schema for the dataframe
schema = StructType([
    StructField('id', IntegerType(), True),
    StructField('age', IntegerType(), True),
    StructField('number', StringType(), True),
    StructField('name', StringType(), True)])

# Convert list to RDD
rdd = spark.sparkContext.parallelize(data)

# Create data frame
df = spark.createDataFrame(rdd, schema)

I tried using collect_list and collect_set, but was not able to get the desired output.

Answer

You can use collect_list and to_json to collect an array of JSON strings for each group:

import pyspark.sql.functions as F

df2 = df.groupBy(
    'id', 'age'
).agg(
    F.collect_list(
        F.to_json(
            F.struct('number', 'name')
        )
    ).alias('values')
).orderBy(
    'id', 'age'
)

df2.show(truncate=False)
+---+---+-----------------------------------------------------------------------+
|id |age|values                                                                 |
+---+---+-----------------------------------------------------------------------+
|1  |12 |[{"number":"smith","name":"uber"}]                                     |
|2  |13 |[{"number":"jon","name":"lunch"}]                                      |
|3  |15 |[{"number":"jocelyn","name":"rental"}, {"number":"megan","name":"sds"}]|
+---+---+-----------------------------------------------------------------------+
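The grouping logic above can also be sketched in plain Python (no Spark required), which may help verify the expected shape of the values column. The sample rows below mirror the question's data, with fields in the schema's order (id, age, number, name):

```python
import json
from itertools import groupby

# Sample rows in the schema's column order: (id, age, number, name)
rows = [
    (1, 12, "smith", "uber"),
    (2, 13, "jon", "lunch"),
    (3, 15, "jocelyn", "rental"),
    (3, 15, "megan", "sds"),
]

# Group by (id, age) and serialize each (number, name) pair to a JSON
# string, mirroring collect_list(to_json(struct('number', 'name'))).
# groupby requires the input to be sorted by the grouping key.
result = {}
for key, group in groupby(sorted(rows), key=lambda r: (r[0], r[1])):
    result[key] = [json.dumps({"number": n, "name": m}) for _, _, n, m in group]

print(result[(3, 15)])
# → ['{"number": "jocelyn", "name": "rental"}', '{"number": "megan", "name": "sds"}']
```

If a single JSON string per group is needed rather than an array of strings, one variation in Spark (2.4+) is to apply to_json to the collected array of structs instead: F.to_json(F.collect_list(F.struct('number', 'name'))).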
