Removing duplicates from rows based on specific columns in an RDD/Spark DataFrame

Problem Description

Let's say I have a rather large dataset in the following form:

data = sc.parallelize([('Foo',41,'US',3),
                       ('Foo',39,'UK',1),
                       ('Bar',57,'CA',2),
                       ('Bar',72,'CA',2),
                       ('Baz',22,'US',6),
                       ('Baz',36,'US',6)])

What I would like to do is remove duplicate rows based on the values of the first, third and fourth columns only.

Removing entirely duplicate rows is straightforward:

data = data.distinct()

though note that distinct() only removes rows that are identical in every column; in this dataset, rows 5 and 6 differ in their second column (22 vs. 36), so none of the rows above would actually be dropped.

But how do I remove duplicate rows based on columns 1, 3 and 4 only, i.e. remove either one of these:

('Baz',22,'US',6)
('Baz',36,'US',6)

In pandas, this could be done by specifying columns with .drop_duplicates(). How can I achieve the same in Spark/PySpark?
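
For reference, this is the pandas behaviour being described. A minimal sketch, with made-up column names, since the original tuples are unnamed:

>>> import pandas as pd
>>> # Collect the RDD locally and label the columns (names are illustrative).
>>> pdf = pd.DataFrame(data.collect(), columns=['name', 'age', 'country', 'code'])
>>> # Drop duplicates considering only columns 1, 3 and 4.
>>> len(pdf.drop_duplicates(subset=['name', 'country', 'code']))
4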

Solution

PySpark does include a dropDuplicates() method, which was introduced in Spark 1.4: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.dropDuplicates

>>> from pyspark.sql import Row
>>> df = sc.parallelize([ \
...     Row(name='Alice', age=5, height=80), \
...     Row(name='Alice', age=5, height=80), \
...     Row(name='Alice', age=10, height=80)]).toDF()
>>> df.dropDuplicates().show()
+---+------+-----+
|age|height| name|
+---+------+-----+
|  5|    80|Alice|
| 10|    80|Alice|
+---+------+-----+

>>> df.dropDuplicates(['name', 'height']).show()
+---+------+-----+
|age|height| name|
+---+------+-----+
|  5|    80|Alice|
+---+------+-----+
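
Applied to the question's dataset, the same approach works once the RDD is converted to a DataFrame. A minimal sketch follows; the column names (name, age, country, code) are invented here for illustration, since the original tuples are unnamed:

>>> df = data.toDF(['name', 'age', 'country', 'code'])
>>> # Keep one row per (name, country, code) combination, i.e. deduplicate
>>> # on columns 1, 3 and 4 only. Which duplicate survives is arbitrary.
>>> df.dropDuplicates(['name', 'country', 'code']).count()
4

If you would rather stay in the RDD API, a sketch with keyBy() and reduceByKey() achieves the same effect by keeping one row per key:

>>> data.keyBy(lambda r: (r[0], r[2], r[3])) \
...     .reduceByKey(lambda a, b: a) \
...     .values() \
...     .count()
4

Note that neither version guarantees which of the duplicate rows is kept; if you need, say, the row with the smallest second column, you would have to aggregate or use a window function instead.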
