Spark SQL Row_number() PartitionBy Sort Desc
Problem description
I've successfully created a row_number() partitionBy window in Spark, but I would like to sort it in descending rather than the default ascending order. Here is my working code:
from pyspark.sql import HiveContext  # HiveContext lives in pyspark.sql, not pyspark
from pyspark.sql.types import *
from pyspark.sql import Row, functions as F
from pyspark.sql.window import Window

data_cooccur.select(
    "driver", "also_item", "unit_count",
    F.rowNumber().over(Window.partitionBy("driver").orderBy("unit_count")).alias("rowNum")
).show()
This gives me this result:
+------+---------+----------+------+
|driver|also_item|unit_count|rowNum|
+------+---------+----------+------+
| s10| s11| 1| 1|
| s10| s13| 1| 2|
| s10| s17| 1| 3|
And here I add the desc() to order descending:
data_cooccur.select(
    "driver", "also_item", "unit_count",
    F.rowNumber().over(Window.partitionBy("driver").orderBy("unit_count").desc()).alias("rowNum")
).show()
And get this error:
AttributeError: 'WindowSpec' object has no attribute 'desc'
What am I doing wrong here?
Recommended answer
desc should be applied on a column, not a window definition. You can use either a method on a column:
from pyspark.sql.functions import col, row_number
from pyspark.sql.window import Window

row_number().over(
    Window.partitionBy("driver").orderBy(col("unit_count").desc())
)
Or a standalone function:
from pyspark.sql.functions import desc, row_number
from pyspark.sql.window import Window

row_number().over(
    Window.partitionBy("driver").orderBy(desc("unit_count"))
)
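Spark aside, the semantics of row_number() over a partition ordered descending can be sketched in plain Python, which may help check expectations without a cluster. The sample rows and the helper name are illustrative, not from the original question:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical sample rows in the question's (driver, also_item, unit_count) shape.
rows = [
    ("s10", "s11", 1),
    ("s10", "s13", 3),
    ("s10", "s17", 2),
    ("s20", "s21", 5),
    ("s20", "s22", 4),
]

def row_number_desc(rows, partition_idx, order_idx):
    """Append a 1-based row number within each partition, ordered descending."""
    out = []
    # Group rows by the partition column (groupby needs sorted input).
    keyed = sorted(rows, key=itemgetter(partition_idx))
    for _, group in groupby(keyed, key=itemgetter(partition_idx)):
        # Order each partition descending by the order column, then number.
        ordered = sorted(group, key=itemgetter(order_idx), reverse=True)
        for i, row in enumerate(ordered, start=1):
            out.append(row + (i,))
    return out

for r in row_number_desc(rows, 0, 2):
    print(r)
```

Within each `driver` partition, the row with the largest `unit_count` gets rowNum 1, mirroring `orderBy(desc("unit_count"))`.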