Rename pivoted and aggregated column in PySpark Dataframe
Problem description
With the following dataframe:
from pyspark.sql.functions import avg, first

rdd = sc.parallelize(
    [
        (0, "A", 223, "201603", "PORT"),
        (0, "A", 22, "201602", "PORT"),
        (0, "A", 422, "201601", "DOCK"),
        (1, "B", 3213, "201602", "DOCK"),
        (1, "B", 3213, "201601", "PORT"),
        (2, "C", 2321, "201601", "DOCK"),
    ]
)
df_data = sqlContext.createDataFrame(rdd, ["id", "type", "cost", "date", "ship"])
df_data.show()
I apply a pivot to it:
df_data.groupby(df_data.id, df_data.type).pivot("date").agg(avg("cost"), first("ship")).show()
+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+
| id|type|201601_avg(cost)|201601_first(ship)()|201602_avg(cost)|201602_first(ship)()|201603_avg(cost)|201603_first(ship)()|
+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+
| 2| C| 2321.0| DOCK| null| null| null| null|
| 0| A| 422.0| DOCK| 22.0| PORT| 223.0| PORT|
| 1| B| 3213.0| PORT| 3213.0| DOCK| null| null|
+---+----+----------------+--------------------+----------------+--------------------+----------------+--------------------+
But I get these really complicated names for the columns. Applying alias on the aggregation usually works, but because of the pivot the names in this case are even worse:
+---+----+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+
| id|type|201601_(avg(cost),mode=Complete,isDistinct=false) AS cost#1619|201601_(first(ship)(),mode=Complete,isDistinct=false) AS ship#1620|201602_(avg(cost),mode=Complete,isDistinct=false) AS cost#1619|201602_(first(ship)(),mode=Complete,isDistinct=false) AS ship#1620|201603_(avg(cost),mode=Complete,isDistinct=false) AS cost#1619|201603_(first(ship)(),mode=Complete,isDistinct=false) AS ship#1620|
+---+----+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+
| 2| C| 2321.0| DOCK| null| null| null| null|
| 0| A| 422.0| DOCK| 22.0| PORT| 223.0| PORT|
| 1| B| 3213.0| PORT| 3213.0| DOCK| null| null|
+---+----+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+--------------------------------------------------------------+------------------------------------------------------------------+
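For reference, the aliased attempt was presumably something along these lines; the AS cost#1619 / AS ship#1620 fragments in the names above point to the aggregations being aliased to cost and ship:

df_data.groupby(df_data.id, df_data.type).pivot("date") \
    .agg(avg("cost").alias("cost"), first("ship").alias("ship")) \
    .show()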
Is there a way to rename the column names on the fly on the pivot and aggregation?
Recommended answer
A simple regular expression should do the trick:
import re

def clean_names(df):
    # Strip the aggregate wrapper from pivoted column names, turning
    # e.g. "201601_avg(cost)" or "201601_first(ship)()" into "201601_cost".
    # Note: the pattern should be a raw string to avoid invalid-escape warnings.
    p = re.compile(r"^(\w+?)_([a-z]+)\((\w+)\)(?:\(\))?")
    return df.toDF(*[p.sub(r"\1_\3", c) for c in df.columns])
pivoted = df_data.groupby(...).pivot(...).agg(...)
clean_names(pivoted).printSchema()
## root
## |-- id: long (nullable = true)
## |-- type: string (nullable = true)
## |-- 201601_cost: double (nullable = true)
## |-- 201601_ship: string (nullable = true)
## |-- 201602_cost: double (nullable = true)
## |-- 201602_ship: string (nullable = true)
## |-- 201603_cost: double (nullable = true)
## |-- 201603_ship: string (nullable = true)
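As a quick sanity check, the pattern maps the two name shapes seen above as expected (a plain-Python sketch, runnable without Spark):

p = re.compile(r"^(\w+?)_([a-z]+)\((\w+)\)(?:\(\))?")
print(p.sub(r"\1_\3", "201601_avg(cost)"))      # -> 201601_cost
print(p.sub(r"\1_\3", "201601_first(ship)()"))  # -> 201601_ship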
If you want to preserve the function name, change the substitution pattern to, for example, \1_\2_\3.
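A minimal sketch of that variant (clean_names_keep_fn is a hypothetical name; the regex is unchanged, only the replacement differs):

def clean_names_keep_fn(df):
    # Keeps the aggregate function in the name, e.g.
    # "201601_avg(cost)" -> "201601_avg_cost" and
    # "201601_first(ship)()" -> "201601_first_ship".
    p = re.compile(r"^(\w+?)_([a-z]+)\((\w+)\)(?:\(\))?")
    return df.toDF(*[p.sub(r"\1_\2_\3", c) for c in df.columns])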