How to transpose/pivot the rows data to column in Spark Scala?
Question
I am new to Spark-SQL. I have information in a Spark DataFrame like this:
Company  Type  Status
A        X     done
A        Y     done
A        Z     done
C        X     done
C        Y     done
B        Y     done
I want to display it like this:
Company  X-type   Y-type  Z-type
A        done     done    done
B        pending  done    pending
C        done     done    pending
I am not able to achieve this in Spark-SQL. Please help.
Answer
You can groupBy on Company and then use the pivot function on the column Type.
Here is a simple example:
import org.apache.spark.sql.functions._
import spark.implicits._  // needed for toDF on a local Seq

val df = spark.sparkContext.parallelize(Seq(
  ("A", "X", "done"),
  ("A", "Y", "done"),
  ("A", "Z", "done"),
  ("C", "X", "done"),
  ("C", "Y", "done"),
  ("B", "Y", "done")
)).toDF("Company", "Type", "Status")

// Pivot the Type values into columns; a company that has no row for a
// given type gets "pending" instead of null
val result = df.groupBy("Company")
  .pivot("Type")
  .agg(expr("coalesce(first(Status), \"pending\")"))

result.show()
Output:
+-------+-------+----+-------+
|Company| X| Y| Z|
+-------+-------+----+-------+
| B|pending|done|pending|
| C| done|done|pending|
| A| done|done| done|
+-------+-------+----+-------+
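If the distinct values of Type are known up front, you can pass them to pivot explicitly; Spark then skips the extra pass it would otherwise run to discover them, and the column order is fixed. A minimal sketch, reusing the df from above (the types list here is derived from the sample data, not from the DataFrame):

```scala
// The distinct Type values, taken straight from the sample rows
val sample = Seq(
  ("A", "X", "done"), ("A", "Y", "done"), ("A", "Z", "done"),
  ("C", "X", "done"), ("C", "Y", "done"), ("B", "Y", "done")
)
val types = sample.map(_._2).distinct.sorted  // Seq("X", "Y", "Z")

// pivot(column, values) avoids a separate job to collect the values
val resultExplicit = df.groupBy("Company")
  .pivot("Type", types)
  .agg(expr("coalesce(first(Status), \"pending\")"))
```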
You can rename the columns later.
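One way to sketch that renaming, to match the X-type style headers from the question: fold withColumnRenamed over the pivoted columns with a small helper (the helper name renameType is just for illustration, not part of the Spark API):

```scala
// Append "-type" to the pivoted type columns, leave "Company" as-is
def renameType(c: String): String =
  if (Set("X", "Y", "Z").contains(c)) s"$c-type" else c

val renamed = result.columns
  .foldLeft(result)((df, c) => df.withColumnRenamed(c, renameType(c)))

renamed.show()
// Columns are now: Company, X-type, Y-type, Z-type
```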
Hope this helps!