spark - scala: not a member of org.apache.spark.sql.Row
Question
I am trying to convert a DataFrame to an RDD and then perform the operations below to return tuples:
df.rdd.map { t =>
  (t._2 + "_" + t._3, t)
}.take(5)
Then I got the error below. Anyone have any ideas? Thanks!
<console>:37: error: value _2 is not a member of org.apache.spark.sql.Row
(t._2 + "_" + t._3 , t)
^
Answer
When you convert a DataFrame to an RDD, you get an RDD[Row], so when you use map, your function receives a Row as its parameter. Therefore, you must use the Row methods to access its members (note that indices start at 0):
import org.apache.spark.sql.Row

df.rdd.map {
  row: Row => (row.getString(1) + "_" + row.getString(2), row)
}.take(5)
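The root of the confusion is that Row is not a tuple, so the 1-based accessors _1, _2, ... simply do not exist on it, while Row's own getters are 0-based. The following self-contained sketch illustrates the two indexing conventions side by side; FakeRow is a hypothetical stand-in used only for illustration, not Spark's actual Row class:

```scala
object RowIndexingDemo {
  // Hypothetical stand-in for a Row backed by a Seq[Any] of column values.
  // Like Spark's Row.getString(i), access here is 0-based.
  final case class FakeRow(values: Seq[Any]) {
    def getString(i: Int): String = values(i).asInstanceOf[String]
  }

  def main(args: Array[String]): Unit = {
    val row = FakeRow(Seq("id-1", "foo", "bar"))
    val tup = ("id-1", "foo", "bar")

    // Tuple accessors are 1-based: _2 is the second element...
    assert(tup._2 == "foo")
    // ...while Row-style getters are 0-based: getString(1) is also the second column.
    assert(row.getString(1) == "foo")

    // The key the question tries to build, using 0-based Row access:
    val key = row.getString(1) + "_" + row.getString(2)
    assert(key == "foo_bar")
    println(key)
  }
}
```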
You can view more examples and check all the methods available on Row objects in the Spark scaladoc.
I don't know why you are doing this operation, but for concatenating String columns of a DataFrame you may want to consider the following option:
import org.apache.spark.sql.functions._
val newDF = df.withColumn("concat", concat(df("col2"), lit("_"), df("col3")))