How to subset SparkR data frame


Problem description

Assume we have a dataset 'people' which contains Id and Age as a 2×3 matrix.

Id = 1 2 3
Age= 21 18 30

In SparkR I want to create a new dataset people2 that contains all Ids older than 18; in this case, Ids 1 and 3. In SparkR I would do this:

people2 <- people$Age > 18

but it does not work. How would you create the new dataset?

Recommended answer

For those who appreciate R's multitude of options for any given task, you can also use the SparkR::subset() function:

> people <- createDataFrame(sqlContext, data.frame(Id=1:3, Age=c(21, 18, 30)))
> people2 <- subset(people, people$Age > 18, select = c(1,2))
> head(people2)
  Id Age
1  1  21
2  3  30
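If you do not need column selection, SparkR's filter() (aliased as where()) does the same row filtering more directly. This is a minimal sketch assuming the `people` SparkDataFrame created above and an active SparkR session:

```r
# Keep only rows whose Age column is greater than 18.
# filter() takes a SparkDataFrame and a Column condition,
# just like the condition argument of subset().
people2 <- filter(people, people$Age > 18)
head(people2)
```

The key difference from the failing attempt in the question is that `people$Age > 18` is a Column expression, not a logical vector; it must be passed to filter()/subset() rather than assigned directly.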

To answer the additional detail in the comment:

id <- 1:99
age <- 99:1
myRDF <- data.frame(id, age)
mySparkDF <- createDataFrame(sqlContext, myRDF)

newSparkDF <- subset(mySparkDF, 
        mySparkDF$id==3 | mySparkDF$id==32 | mySparkDF$id==43 | mySparkDF$id==55, 
        select = 1:2)
take(newSparkDF,5)

  id age
1  3  97
2 32  68
3 43  57
4 55  45
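The chain of `|` conditions above grows unwieldy as the list of Ids gets longer. A more compact sketch, assuming the `mySparkDF` from above and relying on the %in% operator that SparkR defines for Column objects:

```r
# Filter to a set of Ids in one condition instead of chained ORs.
wanted <- c(3, 32, 43, 55)
newSparkDF <- filter(mySparkDF, mySparkDF$id %in% wanted)
take(newSparkDF, 5)
```

Both forms compile to the same Spark predicate; %in% simply keeps the condition readable when the set of values is large or computed at runtime.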
