Calling Spark window functions in R using sparklyr
Problem description
I have been trying to replicate the following pyspark snippet in sparklyr, but with no luck.
from pyspark.sql.window import Window
from pyspark.sql.functions import concat, col, lit, approx_count_distinct

df = spark.sql("select * from mtcars")
dff = df.withColumn("test", concat(col("gear"), lit(" "), col("carb")))
w = Window.partitionBy("cyl").orderBy("cyl")
dff.withColumn("distinct", approx_count_distinct("test").over(w)).show()
The concatenation part I did manage to get working, like so:
tbl(sc, "mtcars") %>%
  spark_dataframe() %>%
  invoke("withColumn",
         "concat",
         invoke_static(sc, "org.apache.spark.sql.functions", "expr", "concat(gear, carb)")) %>%
  sdf_register()
What I can't seem to figure out is how to invoke Window.partitionBy() and Window.orderBy():
# Doesn't work
w <- invoke_static(sc, "org.apache.spark.sql.expressions.Window", "partitionBy", "cyl")
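One likely reason the call above fails is that Window.partitionBy is a Scala varargs method (partitionBy(colName: String, colNames: String*)), and sparklyr's invoke interface expects the varargs tail to be supplied as an explicit list. The sketch below, which is an untested assumption about how those signatures resolve (and which assumes a "test" column already exists from the concat step), shows how the window and the aggregate column could be built via the JVM API:

```r
# Hedged sketch, not verified against a live Spark session:
# Scala varargs parameters are passed as a trailing list() through invoke().
w <- invoke_static(sc, "org.apache.spark.sql.expressions.Window",
                   "partitionBy", "cyl", list()) %>%
  invoke("orderBy", "cyl", list())

# Build the approx_count_distinct("test") column and attach the window spec.
distinct_col <- invoke_static(sc, "org.apache.spark.sql.functions",
                              "approx_count_distinct", "test") %>%
  invoke("over", w)

tbl(sc, "mtcars") %>%
  spark_dataframe() %>%
  invoke("withColumn", "distinct", distinct_col) %>%
  sdf_register()
```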
Some pointers would be very helpful!
Recommended answer
You can pipe the SQL in directly:
mtcars_spk <- copy_to(sc, mtcars, "mtcars_spk", overwrite = TRUE)

mtcars_spk2 <- mtcars_spk %>%
  dplyr::mutate(test = paste0(gear, " ", carb)) %>%
  dplyr::mutate(discnt = sql("approx_count_distinct(test) OVER (PARTITION BY cyl)"))
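The original pyspark window also had an orderBy("cyl"). If that ordering matters, the same raw-SQL approach can carry it, though note (an assumption worth verifying) that an ORDER BY inside an OVER clause gives Spark a default running frame, which may change the result for other orderings:

```r
# Hedged sketch: same approach with the ORDER BY from the pyspark window.
# Since the ordering column equals the partition column here, the default
# running frame should still cover the whole partition.
mtcars_spk3 <- mtcars_spk %>%
  dplyr::mutate(test = paste0(gear, " ", carb)) %>%
  dplyr::mutate(discnt = sql(
    "approx_count_distinct(test) OVER (PARTITION BY cyl ORDER BY cyl)"))
```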
It is worth noting that this is a rare case: most other window functions are supported natively in sparklyr. If you just wanted a count or a min(gear) partitioned by cyl, you could do that easily:
mtcars_spk <- copy_to(sc, mtcars, "mtcars_spk", overwrite = TRUE)

mtcars_spk <- mtcars_spk %>%
  group_by(cyl) %>%
  arrange(cyl) %>%
  mutate(cnt = n(),
         mindis = min(disp))