Apache Spark update a row in an RDD or Dataset based on another row

Question

I'm trying to figure out how I can update some rows based on another row.

For example, I have some data like

Id | username | ratings | city
--------------------------------
1, philip, 2.0, montreal, ...
2, john, 4.0, montreal, ...
3, charles, 2.0, texas, ...

I want to update the users in the same city to the same groupId (either 1 or 2):

Id | username | ratings | city
--------------------------------
1, philip, 2.0, montreal, ...
1, john, 4.0, montreal, ...
3, charles, 2.0, texas, ...

How can I achieve this in my RDD or Dataset?

So, just for the sake of completeness: what if the Id is a String? Won't the dense rank approach stop working?

For example:

Id | username | ratings | city
--------------------------------
a, philip, 2.0, montreal, ...
b, john, 4.0, montreal, ...
c, charles, 2.0, texas, ...

So the result looks like this:

grade | username | ratings | city
--------------------------------
a, philip, 2.0, montreal, ...
a, john, 4.0, montreal, ...
c, charles, 2.0, texas, ...

Answer

A clean way to do this would be to use dense_rank() from Window functions. It enumerates the unique values in your Window column. Because city is a String column, the ranks will increase alphabetically.

import org.apache.spark.sql.functions.dense_rank
import org.apache.spark.sql.expressions.Window
import spark.implicits._  // for the $"..." column syntax

val df = spark.createDataFrame(Seq(
  (1, "philip", 2.0, "montreal"),
  (2, "john", 4.0, "montreal"),
  (3, "charles", 2.0, "texas"))).toDF("Id", "username", "rating", "city")

// dense_rank() assigns consecutive ranks with no gaps,
// so all rows sharing a city share the same id.
val w = Window.orderBy($"city")
df.withColumn("id", dense_rank().over(w)).show()

+---+--------+------+--------+
| id|username|rating|    city|
+---+--------+------+--------+
|  1|  philip|   2.0|montreal|
|  1|    john|   4.0|montreal|
|  2| charles|   2.0|   texas|
+---+--------+------+--------+
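
For the String-Id follow-up, dense_rank() would still emit integers (1, 1, 2) rather than reuse the original ids. Here is a minimal sketch for the expected output, assuming the intent is to give every user in a city that city's smallest Id (as in the asker's "grade" table): partition a window by city and take the minimum over it.

import org.apache.spark.sql.functions.min
import org.apache.spark.sql.expressions.Window
import spark.implicits._

val dfStr = spark.createDataFrame(Seq(
  ("a", "philip", 2.0, "montreal"),
  ("b", "john", 4.0, "montreal"),
  ("c", "charles", 2.0, "texas"))).toDF("Id", "username", "rating", "city")

// min() over an unordered partition covers the whole city group,
// so every row in a city gets that city's smallest Id:
// "a" for both montreal rows, "c" for texas.
val wCity = Window.partitionBy($"city")
dfStr.withColumn("Id", min($"Id").over(wCity)).show()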

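The question also asks about plain RDDs. A hedged sketch under the same assumption (the tuple layout below is illustrative, not from the original post): collect a city-to-smallest-Id lookup, broadcast it, and rewrite each row from the lookup.

val rdd = spark.sparkContext.parallelize(Seq(
  (1, "philip", 2.0, "montreal"),
  (2, "john", 4.0, "montreal"),
  (3, "charles", 2.0, "texas")))

// Smallest Id per city, collected to the driver and broadcast to the executors.
val cityToId = rdd
  .map { case (id, _, _, city) => (city, id) }
  .reduceByKey(math.min)
  .collectAsMap()
val lookup = spark.sparkContext.broadcast(cityToId)

// Rewrite each row with its city's group id.
val grouped = rdd.map { case (_, name, rating, city) =>
  (lookup.value(city), name, rating, city)
}
grouped.collect().foreach(println)

Unlike Window.orderBy without a partitionBy (which pulls all rows into a single partition), this keeps the transformation distributed, at the cost of collecting one entry per city to the driver.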