How to calculate the current row with the next one?
In Spark SQL version 1.6, using DataFrames, is there a way to calculate, for a specific column, the sum of the current row and the next one, for every row?
For example, if I have a table with one column, like so
Age
12
23
31
67
I'd like the following output
Sum
35
54
98
The last row is dropped because it has no "next row" to be added to.
Right now I am doing it by ranking the table and joining it with itself, where the rank equals rank+1.
Is there a better way to do this? Can this be done with a Window function?
Yes, you can definitely do this with a Window function by using rowsBetween. In the following example, I use the person column for grouping purposes.
import sqlContext.implicits._
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val dataframe = Seq(
  ("A", 12),
  ("A", 23),
  ("A", 31),
  ("A", 67)
).toDF("person", "Age")

// Frame spanning the current row and the next row (offsets 0 to 1)
val windowSpec = Window.partitionBy("person").orderBy("Age").rowsBetween(0, 1)
val newDF = dataframe.withColumn("sum", sum(dataframe("Age")).over(windowSpec))

// The last row of each partition has no next row, so its window sum equals its own Age; filter it out
newDF.filter(!(newDF("Age") === newDF("sum"))).show
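Another way to express "current row plus next row" over the same window is Spark's built-in `lead` function (available since Spark 1.4): it fetches the next row's value directly, and rows with no next row get a null that can be filtered out explicitly, which is more robust than comparing the sum to the Age. A sketch assuming the same `dataframe` as above (the `next_age` column name is illustrative):

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val ws = Window.partitionBy("person").orderBy("Age")

// lead("Age", 1) returns the Age of the next row within the partition, or null for the last row
val withNext = dataframe.withColumn("next_age", lead("Age", 1).over(ws))

withNext
  .filter(col("next_age").isNotNull)                 // drop the last row of each partition
  .select((col("Age") + col("next_age")).as("sum"))
  .show
```

The explicit null check avoids a corner case of the filter in the answer above: if a partition ever contained a 0-valued Age, a legitimate pair sum could accidentally equal the row's own Age.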