Exploding a struct type with a large number of columns into two columns of keys and values in PySpark


Problem Description

I have a PySpark DataFrame whose schema looks like this:

 root
 |-- company: struct (nullable = true)
 |    |-- 0: long (nullable = true)
 |    |-- 1: long (nullable = true)
 |    |-- 10: long (nullable = true)
 |    |-- 100: long (nullable = true)
 |    |-- 101: long (nullable = true)
 |    |-- 102: long (nullable = true)
 |    |-- 103: long (nullable = true)
 |    |-- 104: long (nullable = true)
 |    |-- 105: long (nullable = true)
 |    |-- 106: long (nullable = true)
 |    |-- 107: long (nullable = true)
 |    |-- 108: long (nullable = true)
 |    |-- 109: long (nullable = true)
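
For reference, a minimal DataFrame with the same shape can be built like this (the field names and sample values here are hypothetical, just enough to test against):

from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()

# Three hypothetical fields; only the struct-of-numeric-field-names shape matters.
df = (spark.createDataFrame([(1001, 1002, 1004)], ['0', '1', '10'])
           .select(f.struct('0', '1', '10').alias('company')))
df.printSchema()
# root
#  |-- company: struct (nullable = false)
#  |    |-- 0: long (nullable = true)
#  |    |-- 1: long (nullable = true)
#  |    |-- 10: long (nullable = true)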

I want the final format of this DataFrame to look like this:

id    value
0     1001
1     1002
10    1004
100   1005
101   1007
102   1008

Please help me solve this using PySpark.

Recommended Answer

In Python you can convert it using stack:

import pyspark.sql.functions as f
from functools import reduce

# Flatten the struct so every field becomes a top-level column.
df1 = df.select('company.*')

# Build the stack() arguments: a "'name',`value`" pair per column.
# Backticks are required because the column names are numeric.
cols = ','.join([f"'{c}',`{c}`" for c in df1.columns])

# stack() needs all value columns to share one type, so cast them to string.
df1 = reduce(lambda acc, c: acc.withColumn(c, f.col(c).cast('string')), df1.columns, df1)

# stack(n, k1, v1, k2, v2, ...) unpivots the column pairs into rows.
df1.select(f.expr(f'''stack({len(df1.columns)},{cols}) as (id, value)''')).show()

+---+-----+
| id|value|
+---+-----+
|  0|  foo|
|  1|  bar|
+---+-----+
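
An alternative that avoids building a SQL expression string is to pack the struct fields into a map and explode it. Here is a minimal sketch, assuming all the fields share one type (otherwise cast them first, as above):

from itertools import chain
import pyspark.sql.functions as f

df1 = df.select('company.*')

# Interleave literal field names with their value columns to form a map,
# then explode the map into one (id, value) row per field.
kv_pairs = chain.from_iterable((f.lit(c), f.col(f'`{c}`')) for c in df1.columns)
df1.select(f.explode(f.create_map(*kv_pairs)).alias('id', 'value')).show()

This keeps everything in the DataFrame API, which is easier to compose programmatically than a stack() expression string.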
