Creating a row number for each row in a PySpark DataFrame using the row_number() function with Spark version 2.2

Problem Description

I have a PySpark DataFrame:

# sqlContext is pre-defined in the PySpark shell; in a standalone script it
# would need to be created from a SparkSession first
valuesCol = [('Sweden',31),('Norway',62),('Iceland',13),('Finland',24),('Denmark',52)]
df = sqlContext.createDataFrame(valuesCol,['name','id'])
+-------+---+
|   name| id|
+-------+---+
| Sweden| 31|
| Norway| 62|
|Iceland| 13|
|Finland| 24|
|Denmark| 52|
+-------+---+

I wish to add a row number column to this DataFrame, which holds the serial number of each row.

My final output should be:

+-------+---+--------+
|   name| id|row_num |
+-------+---+--------+
| Sweden| 31|       1|
| Norway| 62|       2|
|Iceland| 13|       3|
|Finland| 24|       4|
|Denmark| 52|       5|
+-------+---+--------+

My Spark version is 2.2.

I am trying this code, but it doesn't work:

from pyspark.sql.functions import row_number
from pyspark.sql.window import Window
w = Window().orderBy()
df = df.withColumn("row_num", row_number().over(w))
df.show()

I get the error:

AnalysisException: 'Window function row_number() requires window to be ordered, please add ORDER BY clause. For example SELECT row_number()(value_expr) OVER (PARTITION BY window_partition ORDER BY window_ordering) from table;'

If I understand it correctly, I need to order by some column, but I don't want something like w = Window().orderBy('id'), because that would reorder the entire DataFrame.
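To illustrate the concern (a hypothetical run against the sample data above; the variable name w_id is only for illustration): ordering the window by id assigns row numbers by ascending id, not by the DataFrame's current row order -

from pyspark.sql.functions import row_number
from pyspark.sql.window import Window

# Ordering the window by 'id' numbers the rows by ascending id,
# and the result comes back sorted by id as well
w_id = Window().orderBy('id')
df.withColumn("row_num", row_number().over(w_id)).show()
# +-------+---+-------+
# |   name| id|row_num|
# +-------+---+-------+
# |Iceland| 13|      1|
# |Finland| 24|      2|
# | Sweden| 31|      3|
# |Denmark| 52|      4|
# | Norway| 62|      5|
# +-------+---+-------+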

Can anyone suggest how to achieve the output above using the row_number() function?

Recommended Answer

You need to specify a column in the window's ORDER BY clause. If you don't actually need the values ordered, order by a dummy constant instead. Try the following:

from pyspark.sql.functions import row_number, lit
from pyspark.sql.window import Window

# Order the window by a constant literal so that no real column
# dictates the ordering
w = Window().orderBy(lit('A'))
df = df.withColumn("row_num", row_number().over(w))
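Calling df.show() should then print the desired output, with row numbers following the DataFrame's current row order:

df.show()
# +-------+---+-------+
# |   name| id|row_num|
# +-------+---+-------+
# | Sweden| 31|      1|
# | Norway| 62|      2|
# |Iceland| 13|      3|
# |Finland| 24|      4|
# |Denmark| 52|      5|
# +-------+---+-------+

One caveat: a window with no partitionBy clause pulls all rows into a single partition (Spark logs a performance warning when this happens), and ordering by a constant gives no guaranteed row order across runs, so this approach is best suited to small DataFrames.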
