PySpark explode stringified array of dictionaries into rows


Question


I have a pyspark dataframe with StringType column (edges), which contains a list of dictionaries (see example below). The dictionaries contain a mix of value types, including another dictionary (nodeIDs). I need to explode the top-level dictionaries in the edges field into rows; ideally, I should then be able to convert their component values into separate fields.

Input:

import findspark
findspark.init()

from pyspark.sql import Row, SparkSession

SPARK = SparkSession.builder.enableHiveSupport() \
                    .getOrCreate()

data = [
    Row(trace_uuid='aaaa', timestamp='2019-05-20T10:36:33+02:00', edges='[{"distance":4.382441320292239,"duration":1.5,"speed":2.9,"nodeIDs":{"nodeA":954752475,"nodeB":1665827480}},{"distance":14.48582171131768,"duration":2.6,"speed":5.6,"nodeIDs":{"nodeA":1665827480,"nodeB":3559056131}}]', count=156, level=36),
    Row(trace_uuid='bbbb', timestamp='2019-05-20T11:36:10+03:00', edges='[{"distance":0,"duration":0,"speed":0,"nodeIDs":{"nodeA":520686131,"nodeB":520686216}},{"distance":8.654358326561642,"duration":3.1,"speed":2.8,"nodeIDs":{"nodeA":520686216,"nodeB":506361795}}]', count=179, level=258)
    ]

df = SPARK.createDataFrame(data)
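
For reference, edges arrives as a plain string column; a quick printSchema check should show roughly the following (output reproduced as comments, with fields in alphabetical order since Spark 2.x sorts Row keyword fields):

df.printSchema()
# root
#  |-- count: long (nullable = true)
#  |-- edges: string (nullable = true)
#  |-- level: long (nullable = true)
#  |-- timestamp: string (nullable = true)
#  |-- trace_uuid: string (nullable = true)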

Desired output:

data_reshaped = [
    Row(trace_uuid='aaaa', timestamp='2019-05-20T10:36:33+02:00', distance=4.382441320292239, duration=1.5, speed=2.9, nodeA=954752475, nodeB=1665827480, count=156, level=36),
    Row(trace_uuid='aaaa', timestamp='2019-05-20T10:36:33+02:00', distance=16.134844841712574, duration=2.9, speed=5.6, nodeA=1665827480, nodeB=3559056131, count=156, level=36),
    Row(trace_uuid='bbbb', timestamp='2019-05-20T11:36:10+03:00', distance=0, duration=0, speed=0, nodeA=520686131, nodeB=520686216, count=179, level=258),
    Row(trace_uuid='bbbb', timestamp='2019-05-20T11:36:10+03:00', distance=8.654358326561642, duration=3.1, speed=2.8, nodeA=520686216, nodeB=506361795, count=179, level=258)
]


Is there a way to do that? I've tried using cast to cast the edges field into an array first, but I can't figure out how to get it to work with the mixed data types.


I'm using Spark 2.4.0.
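
The cast attempt looked roughly like the snippet below (a sketch, not the exact code); Spark rejects a direct cast from a string column to a complex type with a data type mismatch error, so this route never parsed the JSON:

from pyspark.sql import functions as F

# Sketch of the cast-based attempt: Spark cannot cast a JSON string column
# to an array/struct type, so this raises an AnalysisException instead of
# parsing the string.
df.withColumn('edges', F.col('edges').cast('array<struct<distance:double,duration:double,speed:double>>'))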

Answer


You can use from_json() with schema_of_json() to infer the JSON schema. For example:

from pyspark.sql import functions as F

# a sample json string:  
edges_json_sample = data[0].edges
# or edges_json_sample = df.select('edges').first()[0]

>>> edges_json_sample
#'[{"distance":4.382441320292239,"duration":1.5,"speed":2.9,"nodeIDs":{"nodeA":954752475,"nodeB":1665827480}},{"distance":14.48582171131768,"duration":2.6,"speed":5.6,"nodeIDs":{"nodeA":1665827480,"nodeB":3559056131}}]'

# infer schema from the sample string
schema = df.select(F.schema_of_json(edges_json_sample)).first()[0]

>>> schema
#u'array<struct<distance:double,duration:double,nodeIDs:struct<nodeA:bigint,nodeB:bigint>,speed:double>>'

# convert json string to data structure and then retrieve desired items
new_df = df.withColumn('data', F.explode(F.from_json('edges', schema))) \
           .select('*', 'data.*', 'data.nodeIDs.*') \
           .drop('data', 'nodeIDs', 'edges')
           
>>> new_df.show()
+-----+-----+--------------------+----------+-----------------+--------+-----+----------+----------+
|count|level|           timestamp|trace_uuid|         distance|duration|speed|     nodeA|     nodeB|
+-----+-----+--------------------+----------+-----------------+--------+-----+----------+----------+
|  156|   36|2019-05-20T10:36:...|      aaaa|4.382441320292239|     1.5|  2.9| 954752475|1665827480|
|  156|   36|2019-05-20T10:36:...|      aaaa|14.48582171131768|     2.6|  5.6|1665827480|3559056131|
|  179|  258|2019-05-20T11:36:...|      bbbb|              0.0|     0.0|  0.0| 520686131| 520686216|
|  179|  258|2019-05-20T11:36:...|      bbbb|8.654358326561642|     3.1|  2.8| 520686216| 506361795|
+-----+-----+--------------------+----------+-----------------+--------+-----+----------+----------+

# expected result
data_reshaped = new_df.rdd.collect()
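
If you would rather not infer the schema from a sample row (for example, when the sample string might be missing optional fields), the same result can be obtained with an explicitly built schema. This is a sketch; the field names simply mirror the JSON keys from the question:

from pyspark.sql.types import ArrayType, StructType, StructField, DoubleType, LongType

# hand-built equivalent of the inferred schema above
edges_schema = ArrayType(StructType([
    StructField('distance', DoubleType()),
    StructField('duration', DoubleType()),
    StructField('speed', DoubleType()),
    StructField('nodeIDs', StructType([
        StructField('nodeA', LongType()),
        StructField('nodeB', LongType())
    ]))
]))

new_df = df.withColumn('data', F.explode(F.from_json('edges', edges_schema))) \
           .select('*', 'data.*', 'data.nodeIDs.*') \
           .drop('data', 'nodeIDs', 'edges')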
