Spark Dataframe select based on column index

Question

How do I select all the columns of a dataframe that are at certain indexes, in Scala?

For example, if a dataframe has 100 columns and I want to extract only columns 10, 12, 13, 14, and 15, how do I do that?

The following selects all the columns of dataframe df whose names appear in the Array colNames:

df = df.select(colNames.head, colNames.tail: _*)
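As a minimal self-contained sketch of this name-based variant (the sample data and the names in colNames are illustrative; a SparkSession named spark is assumed to be in scope, as in spark-shell):

import spark.implicits._

val people = Seq((1, "a", true)).toDF("id", "label", "flag")
val colNames = Array("id", "flag")
val byName = people.select(colNames.head, colNames.tail: _*)  // keeps id and flag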

If there is a similar array, colNos, holding column indexes:

colNos = Array(10, 20, 25, 45)

How do I transform the above df.select to fetch only the columns at those specific indexes?

Answer

You can map over columns:

import org.apache.spark.sql.functions.col

df.select(colNos map df.columns map col: _*)

or:

df.select(colNos map (df.columns andThen col): _*)

or:

df.select(colNos map (col _ compose df.columns): _*)
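For reuse, any of these variants can be wrapped in a small helper (a sketch; the name selectByIndex is illustrative and not part of the original answer):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// Select columns by zero-based position. Out-of-range indexes throw,
// because df.columns is a plain Array[String].
def selectByIndex(df: DataFrame, indexes: Seq[Int]): DataFrame =
  df.select(indexes map df.columns map col: _*)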

All of these variants are equivalent and impose no performance penalty. The following mapping:

colNos map df.columns 

is just a local Array access (constant-time access for each index), and choosing between the String-based and Column-based variants of select doesn't affect the execution plan:

val df = Seq((1, 2, 3, 4, 5, 6)).toDF

val colNos = Seq(0, 3, 5)

df.select(colNos map df.columns map col: _*).explain

== Physical Plan ==
LocalTableScan [_1#46, _4#49, _6#51]

df.select("_1", "_4", "_6").explain

== Physical Plan ==
LocalTableScan [_1#46, _4#49, _6#51]
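Applied to the scenario from the question, using the helper sketched earlier (wideDf stands in for the hypothetical 100-column DataFrame):

val result = selectByIndex(wideDf, Seq(10, 12, 13, 14, 15))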
