Can Dataframe joins in Spark preserve order?


Problem description

I'm currently trying to join two DataFrames together while retaining the original order of one of them.

From Which operations preserve RDD order?, it seems (correct me if this is inaccurate, as I'm new to Spark) that joins do not preserve order: because the data lives in different partitions, rows are joined and "arrive" at the final DataFrame in an unspecified order rather than a guaranteed one.

How could one perform a join of two DataFrames while preserving the order of one table?

For example,

+------+------+
| col1 | col2 |
+------+------+
| 0    | a    |
| 1    | b    |
+------+------+

joined with

+------+------+
| col2 | col3 |
+------+------+
| b    | x    |
| a    | y    |
+------+------+

on col2 should give

+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
| 0    | a    | y    |
| 1    | b    | x    |
+------+------+------+
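As a minimal sketch of the setup (assuming a running SparkSession named spark; the variable names left and right are illustrative), the example tables above can be built like this, and a plain join on col2 makes no guarantee about the row order of the result:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("join-order").getOrCreate()
import spark.implicits._

val left  = Seq((0, "a"), (1, "b")).toDF("col1", "col2")
val right = Seq(("b", "x"), ("a", "y")).toDF("col2", "col3")

// A plain equi-join on col2; Spark gives no guarantee that the
// result preserves the row order of either input.
left.join(right, Seq("col2")).show()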

I've heard some things about using coalesce or repartition, but I'm not sure. Any suggestions/methods/insights are appreciated.

Edit: would this be analogous to having a single reducer in MapReduce? If so, what would that look like in Spark?

Recommended answer

It can't. You can add monotonically_increasing_id and reorder the data after the join.
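A minimal sketch of that approach, assuming the left and right DataFrames from the example above (the helper column name _order is just illustrative):

import org.apache.spark.sql.functions.monotonically_increasing_id

// Tag each row of the left table, before the join, with an id that is
// guaranteed to be monotonically increasing (though not consecutive)
// in the table's current order.
val leftWithId = left.withColumn("_order", monotonically_increasing_id())

// The join itself may shuffle rows into an arbitrary order.
val joined = leftWithId.join(right, Seq("col2"))

// Sort on the saved id to restore the left table's order, then drop it.
val result = joined.orderBy("_order").drop("_order")
result.show()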
