PySpark: How to create a nested JSON from a Spark data frame?
Question
I am trying to create a nested JSON from my Spark data frame, which has data in the following structure. The code below creates a simple JSON with key/value pairs. Could you please help?
df.coalesce(1).write.format('json').save(data_output_file + "createjson.json", mode='overwrite')
Update 1: As per @MaxU's answer, I converted the Spark data frame to pandas and used groupby. It puts the last two fields in a nested array. How could I first put the category and count in a nested array, and then inside that array put the subcategory and its count?
Sample text data:
Vendor_Name,count,Categories,Category_Count,Subcategory,Subcategory_Count
Vendor1,10,Category 1,4,Sub Category 1,1
Vendor1,10,Category 1,4,Sub Category 2,2
Vendor1,10,Category 1,4,Sub Category 3,3
Vendor1,10,Category 1,4,Sub Category 4,4
j = (data_pd.groupby(['vendor_name','vendor_Cnt','Category','Category_cnt'], as_index=False)
     .apply(lambda x: x[['Subcategory','subcategory_cnt']].to_dict('records'))
     .reset_index()
     .rename(columns={0:'subcategories'})
     .to_json(orient='records'))
Expected output:

[{
    "vendor_name": "Vendor 1",
    "count": 10,
    "categories": [{
        "name": "Category 1",
        "count": 4,
        "subCategories": [{
            "name": "Sub Category 1",
            "count": 1
        },
        {
            "name": "Sub Category 2",
            "count": 2
        },
        {
            "name": "Sub Category 3",
            "count": 3
        },
        {
            "name": "Sub Category 4",
            "count": 4
        }]
    }]
}]
Answer
The easiest way to do this in Python/pandas would be to use a series of nested generators with groupby, I think:
def split_df(df):
    # One dict per vendor; grouping on both columns works because the
    # vendor-level count repeats on every row for that vendor.
    for (vendor, count), df_vendor in df.groupby(["Vendor_Name", "count"]):
        yield {
            "vendor_name": vendor,
            "count": count,
            "categories": list(split_category(df_vendor)),
        }

def split_category(df_vendor):
    # One dict per category within a single vendor's rows.
    for (category, count), df_category in df_vendor.groupby(
        ["Categories", "Category_Count"]
    ):
        yield {
            "name": category,
            "count": count,
            "subCategories": list(split_subcategory(df_category)),
        }

def split_subcategory(df_category):
    # One dict per subcategory row. Note: this must iterate over
    # df_category, not the outer df, or every group would get all rows.
    for row in df_category.itertuples():
        yield {"name": row.Subcategory, "count": row.Subcategory_Count}

list(split_df(df))
[
{
"vendor_name": "Vendor1",
"count": 10,
"categories": [
{
"name": "Category 1",
"count": 4,
"subCategories": [
{"name": "Sub Category 1", "count": 1},
{"name": "Sub Category 2", "count": 2},
{"name": "Sub Category 3", "count": 3},
{"name": "Sub Category 4", "count": 4},
],
}
],
}
]
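For reference, here is a minimal, self-contained way to reproduce this result from the sample data shown in the question (building the DataFrame from the CSV text is an illustrative assumption, not part of the original answer):

import io
import pandas as pd

# Recreate the sample DataFrame from the CSV text in the question.
csv_text = """Vendor_Name,count,Categories,Category_Count,Subcategory,Subcategory_Count
Vendor1,10,Category 1,4,Sub Category 1,1
Vendor1,10,Category 1,4,Sub Category 2,2
Vendor1,10,Category 1,4,Sub Category 3,3
Vendor1,10,Category 1,4,Sub Category 4,4
"""
df = pd.read_csv(io.StringIO(csv_text))

result = list(split_df(df))  # yields the nested structure shown above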
To export this to JSON, you'll need a way to serialize the np.int64 values that pandas produces, since the standard json module raises a TypeError on NumPy integer types.
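A minimal sketch of one way to do that, using a default hook with the standard json module (the convert_np helper is illustrative, not from the original answer):

import json
import numpy as np

def convert_np(obj):
    # Convert NumPy integer scalars to plain Python ints for json.dumps.
    if isinstance(obj, np.integer):
        return int(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

print(json.dumps(list(split_df(df)), default=convert_np, indent=2))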
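Since the question asks about PySpark specifically, here is a sketch of a Spark-only alternative (not part of the accepted answer) that builds the same nesting with groupBy, struct, and collect_list, assuming the same column names as the sample data and the data_output_file path from the question:

from pyspark.sql import functions as F

# First level: collect subcategory structs per (vendor, category).
nested = (
    df.groupBy("Vendor_Name", "count", "Categories", "Category_Count")
      .agg(F.collect_list(
          F.struct(
              F.col("Subcategory").alias("name"),
              F.col("Subcategory_Count").alias("count"),
          )).alias("subCategories"))
      # Second level: collect category structs per vendor.
      .groupBy("Vendor_Name", "count")
      .agg(F.collect_list(
          F.struct(
              F.col("Categories").alias("name"),
              F.col("Category_Count").alias("count"),
              F.col("subCategories"),
          )).alias("categories"))
      .withColumnRenamed("Vendor_Name", "vendor_name")
)
nested.coalesce(1).write.mode("overwrite").json(data_output_file + "createjson.json")

This keeps the work distributed and writes one JSON record per vendor, avoiding the pandas conversion entirely.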