Skip/Take with Spark SQL


Problem description

How would one go about implementing a skip/take query (typical server-side grid paging) using Spark SQL? I have scoured the net and can only find very basic examples such as this one: https://databricks-training.s3.amazonaws.com/data-exploration-using-spark-sql.html

I don't see any concept of ROW_NUMBER() or OFFSET/FETCH as in T-SQL. Does anyone know how to accomplish this?

Something like:

scala> csc.sql("select * from users skip 10 limit 10").collect()

Recommended answer

Try something like this:

// Pair each row with its index, then filter on that index to implement skip/take.
val rdd = csc.sql("select * from <keyspace>.<table>")
val rdd2 = rdd.zipWithIndex()
// keep rows with index 6 through 9
rdd2.filter { case (_, i) => i > 5 && i < 10 }.collect()
// keep rows with index 10 and 11
rdd2.filter { case (_, i) => i > 9 && i < 12 }.collect()
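The same pattern can be wrapped in a small reusable helper. The sketch below is a minimal, self-contained illustration rather than the answer's exact setup: it builds its own local SparkContext and a toy RDD instead of the csc Cassandra SQL context used above, and the skipTake helper name is just an assumption for the example.

import scala.reflect.ClassTag

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object SkipTakeExample {

  // Generic skip/take on any RDD: pair each element with its index,
  // keep only the elements whose index falls in [skip, skip + take),
  // then drop the index again before collecting.
  def skipTake[T: ClassTag](rdd: RDD[T], skip: Long, take: Long): Array[T] =
    rdd.zipWithIndex()
       .filter { case (_, i) => i >= skip && i < skip + take }
       .map(_._1)
       .collect()

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("skip-take").setMaster("local[*]"))

    // Toy stand-in for the "users" table.
    val users = sc.parallelize(1 to 100).map(i => s"user$i")

    // "Page 2" with a page size of 10: skip the first 10 rows, take the next 10.
    skipTake(users, skip = 10, take = 10).foreach(println)

    sc.stop()
  }
}

Note that zipWithIndex numbers elements in partition order and runs an extra Spark job to compute partition sizes, and each page still filters over the whole dataset, so for stable, repeatable paging the data should be sorted first; this is a workaround rather than a true OFFSET/FETCH.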

