Spark Dataset: data transformation

Problem description

I have a Spark Dataset in the following format:

+--------------+--------+----+
|name          |type    |cost|
+--------------+--------+----+
|AAAAAAAAAAAAAA|XXXXX   |0.24|
|AAAAAAAAAAAAAA|YYYYY   |1.14|
|BBBBBBBBBBBBBB|XXXXX   |0.78|
|BBBBBBBBBBBBBB|YYYYY   |2.67|
|BBBBBBBBBBBBBB|ZZZZZ   |0.15|
|CCCCCCCCCCCCCC|XXXXX   |1.86|
|CCCCCCCCCCCCCC|YYYYY   |1.50|
|CCCCCCCCCCCCCC|ZZZZZ   |1.00|
+--------------+--------+----+

I want to transform this into an object of the following type:

import java.util.Map;

public class CostPerName {
    private String name;                     // grouping key
    private Map<String, Double> costTypeMap; // type -> cost for that name
}

What I want is:

+--------------+-----------------------------------------------+
|name          |typeCost                                       |
+--------------+-----------------------------------------------+
|AAAAAAAAAAAAAA|(XXXXX, 0.24), (YYYYY, 1.14)                   |
|BBBBBBBBBBBBBB|(XXXXX, 0.78), (YYYYY, 2.67), (ZZZZZ, 0.15)    |
|CCCCCCCCCCCCCC|(XXXXX, 1.86), (YYYYY, 1.50), (ZZZZZ, 1.00)    |
+--------------+-----------------------------------------------+

That is, for each name, I want a map of (type, cost).

What is an efficient way to achieve this transformation? Can I use some DataFrame transformation? I tried groupBy, but that only works when I perform aggregate queries like sum, avg, etc.
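
For illustration (this snippet is not part of the original question), a groupBy with a numeric aggregate such as sum does run, but it collapses each name to a single number instead of the desired (type, cost) map:

import org.apache.spark.sql.functions._

// Runs fine, but produces one total per name rather than a map of (type, cost).
df.groupBy("name")
  .agg(sum(col("cost")).as("totalCost"))
  .show(false)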

Recommended answer

You can use map_from_arrays() if your Spark version allows it:

scala> val df2 = df.groupBy("name").agg(map_from_arrays(collect_list($"type"), collect_list($"cost")).as("typeCost"))
df2: org.apache.spark.sql.DataFrame = [name: string, typeCost: map<string,decimal(3,2)>]

scala> df2.printSchema()
root
 |-- name: string (nullable = false)
 |-- typeCost: map (nullable = true)
 |    |-- key: string
 |    |-- value: decimal(3,2) (valueContainsNull = true)

scala> df2.show(false)
+--------------+---------------------------------------------+
|name          |typeCost                                     |
+--------------+---------------------------------------------+
|AAAAAAAAAAAAAA|[XXXXX -> 0.24, YYYYY -> 1.14]               |
|CCCCCCCCCCCCCC|[XXXXX -> 1.86, YYYYY -> 1.50, ZZZZZ -> 1.00]|
|BBBBBBBBBBBBBB|[XXXXX -> 0.78, YYYYY -> 2.67, ZZZZZ -> 0.15]|
+--------------+---------------------------------------------+

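If you also want the typed object from the question, a minimal Scala sketch could map the aggregated DataFrame onto a case-class analogue of CostPerName. The case class and variable names below are assumptions for illustration, not part of the original answer:

import org.apache.spark.sql.Dataset
import org.apache.spark.sql.functions._
import spark.implicits._   // assumes `spark` is the active SparkSession, as in spark-shell

// Hypothetical Scala counterpart of the Java CostPerName POJO from the question.
case class CostPerName(name: String, costTypeMap: Map[String, Double])

val result: Dataset[CostPerName] = df
  .groupBy("name")
  .agg(
    map_from_arrays(
      collect_list($"type"),
      collect_list($"cost".cast("double"))   // decimal(3,2) -> double to match Map[String, Double]
    ).as("costTypeMap")
  )
  .as[CostPerName]

result.show(false)

Note that map_from_arrays was added in Spark 2.4; on older versions you would need a different approach, for example collecting (type, cost) structs with collect_list and building the map in a typed transformation.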
