Get a range of columns of Spark RDD
Question
Now I have 300+ columns in my RDD, but I found there is a need to dynamically select a range of columns and put them into the LabeledPoint data type. As a newbie to Spark, I am wondering if there is any indexing method to select a range of columns in an RDD, something like temp_data = data[, 101:211] in R. Is there something like val temp_data = data.filter(_.column_index in range(101:211))?
Any thoughts are welcomed and appreciated.
Answer
If it is a DataFrame, then something like this should work:
import org.apache.spark.sql.functions.col

val df = rdd.toDF()
// df.columns is an Array[String]; slice(101, 211) takes the names at
// 0-based indices 101 to 210, and map(col) turns them into Column objects,
// which is what select's Column* varargs overload expects.
df.select(df.columns.slice(101, 211).map(col): _*)
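One indexing detail worth noting when porting from R: Scala's slice(from, until) is 0-based and end-exclusive, whereas R's data[, 101:211] is 1-based and inclusive, so the two expressions do not select exactly the same columns. The plain-Scala sketch below (the 301-element row and the choice of column 0 as the label are illustrative assumptions, not from the question) shows the slice semantics and the (label, features) split that would feed a LabeledPoint:

```scala
// Hypothetical row with 301 numeric columns, values 0.0, 1.0, ..., 300.0.
val row: Array[Double] = Array.tabulate(301)(_.toDouble)

// slice(101, 211) keeps 0-based indices 101 through 210 — 110 values.
// R's data[, 101:211] would keep 111 columns (1-based, inclusive).
val features: Array[Double] = row.slice(101, 211)

// Assumed convention: first column is the label.
val label: Double = row(0)

// In Spark MLlib this pair would become:
//   LabeledPoint(label, Vectors.dense(features))
val point: (Double, Array[Double]) = (label, features)
```

Applying the same row-wise function inside rdd.map(...) gives an RDD of LabeledPoint directly, without going through a DataFrame.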