Full outer join in pyspark data frames


Problem description

I have created two data frames in PySpark as shown below. Both data frames have an id column, and I want to perform a full outer join on them.

valuesA = [('Pirate',1),('Monkey',2),('Ninja',3),('Spaghetti',4)]
a = sqlContext.createDataFrame(valuesA,['name','id'])

a.show()
+---------+---+
|     name| id|
+---------+---+
|   Pirate|  1|
|   Monkey|  2|
|    Ninja|  3|
|Spaghetti|  4|
+---------+---+


valuesB = [('dave',1),('Thor',2),('face',3), ('test',5)]
b = sqlContext.createDataFrame(valuesB,['Movie','id'])

b.show()
+-----+---+
|Movie| id|
+-----+---+
| dave|  1|
| Thor|  2|
| face|  3|
| test|  5|
+-----+---+


full_outer_join = a.join(b, a.id == b.id, how='full')
full_outer_join.show()

+---------+----+-----+----+
|     name|  id|Movie|  id|
+---------+----+-----+----+
|   Pirate|   1| dave|   1|
|   Monkey|   2| Thor|   2|
|    Ninja|   3| face|   3|
|Spaghetti|   4| null|null|
|     null|null| test|   5|
+---------+----+-----+----+

This is the result I want when I perform the full_outer_join:

+---------+-----+----+
|     name|Movie|  id|
+---------+-----+----+
|   Pirate| dave|   1|
|   Monkey| Thor|   2|
|    Ninja| face|   3|
|Spaghetti| null|   4|
|     null| test|   5|
+---------+-----+----+

I tried the following, but I am getting a different result:

full_outer_join = a.join(b, a.id == b.id, how='full').select(a.id, a.name, b.Movie)
full_outer_join.show()
+---------+----+-----+
|     name|  id|Movie|
+---------+----+-----+
|   Pirate|   1| dave|
|   Monkey|   2| Thor|
|    Ninja|   3| face|
|Spaghetti|   4| null|
|     null|null| test|
+---------+----+-----+

As you can see, id 5 is missing from my resulting data frame.

How can I achieve what I want?

Recommended answer

Since the join columns have the same name, you can specify the join columns as a list:

a.join(b, ['id'], how='full').show()
+---+---------+-----+
| id|     name|Movie|
+---+---------+-----+
|  5|     null| test|
|  1|   Pirate| dave|
|  3|    Ninja| face|
|  2|   Monkey| Thor|
|  4|Spaghetti| null|
+---+---------+-----+
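
The snippets above reuse the asker's sqlContext. For a self-contained reproduction, a minimal sketch (assuming a local PySpark environment where SparkSession is available; the session name here is hypothetical) looks like this:

from pyspark.sql import SparkSession

# Local session used only for this sketch; any existing SparkSession works too.
spark = SparkSession.builder.master('local[*]').appName('full-outer-join-demo').getOrCreate()

a = spark.createDataFrame([('Pirate', 1), ('Monkey', 2), ('Ninja', 3), ('Spaghetti', 4)],
                          ['name', 'id'])
b = spark.createDataFrame([('dave', 1), ('Thor', 2), ('face', 3), ('test', 5)],
                          ['Movie', 'id'])

# Joining on the column name (passed as a list) keeps a single id column,
# so rows that exist on only one side still carry their id.
a.join(b, ['id'], how='full').show()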

Alternatively, coalesce the two id columns:

import pyspark.sql.functions as F
a.join(b, a.id == b.id, how='full').select(
    F.coalesce(a.id, b.id).alias('id'), a.name, b.Movie
).show()
+---+---------+-----+
| id|     name|Movie|
+---+---------+-----+
|  5|     null| test|
|  1|   Pirate| dave|
|  3|    Ninja| face|
|  2|   Monkey| Thor|
|  4|Spaghetti| null|
+---+---------+-----+
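
If the column order and row order from the desired output matter, the result can be rearranged afterwards; a small follow-up sketch, assuming the join-on-list version above:

# Reorder columns to match the desired layout (name, Movie, id) and sort by id.
a.join(b, ['id'], how='full') \
    .select('name', 'Movie', 'id') \
    .orderBy('id') \
    .show()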

