value toDF is not a member of org.apache.spark.rdd.RDD
Problem Description
Exception:
val people = sc.textFile("resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
value toDF is not a member of org.apache.spark.rdd.RDD[Person]
Here is the TestApp.scala file:
package main.scala

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext

case class Record1(k: Int, v: String)

object RDDToDataFramesWithCaseClasses {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Simple Spark SQL Application With RDD To DF")
    // sc is an existing SparkContext.
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    // This is used to implicitly convert an RDD to a DataFrame.
    import sqlContext.implicits._

    // Define the schema using a case class.
    // Note: case classes in Scala 2.10 can support only up to 22 fields. To work around
    // this limit, you can use custom classes that implement the Product interface.
    case class Person(name: String, age: Int)

    // Create an RDD of Person objects and register it as a table.
    val people = sc.textFile("resources/people.txt")
      .map(_.split(","))
      .map(p => Person(p(0), p(1).trim.toInt))
      .toDF()
    people.registerTempTable("people")

    // SQL statements can be run by using the sql methods provided by sqlContext.
    val teenagers = sqlContext.sql("SELECT name, age FROM people WHERE age >= 13 AND age <= 19")

    // The results of SQL queries are DataFrames and support all the normal RDD operations.
    // The columns of a row in the result can be accessed by field index:
    teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
    // or by field name:
    teenagers.map(t => "Name: " + t.getAs[String]("name")).collect().foreach(println)
    // row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]
    teenagers.map(_.getValuesMap[Any](List("name", "age"))).collect().foreach(println)
    // Map("name" -> "Justin", "age" -> 19)
  }
}
And the SBT file:
name := "SparkScalaRDBMS"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.5.1"
Recommended Answer
Now I found the reason: the case class should be defined inside the object but outside of the main function (see the corrected sketch at the end of this answer).
OK, I finally fixed the issue. Two things needed to be done:
1. Import implicits: note that this should only be done after an instance of org.apache.spark.sql.SQLContext has been created, because implicits is a member of that instance, so the import is instance-scoped. It should be written as:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
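(In Spark 1.x this import brings rddToDataFrameHolder into scope, the implicit conversion that equips an RDD of case-class, i.e. Product, instances with toDF; without it the compiler cannot resolve the method, which is exactly the error above.)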
2. Move the case class outside of the method: the case class, by means of which you define the schema of the DataFrame, should be defined outside of the method needing it. You can read more about it here: https://issues.scala-lang.org/browse/SI-6649
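Putting the two fixes together, below is a minimal corrected sketch of the question's program. This is a reconstruction under the stated fixes, not the answerer's verbatim code; the names, file path, and query are kept from the question.

package main.scala

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Declared at the top level (not inside main), so the compiler can
// materialize the TypeTag that the toDF conversion requires (see SI-6649).
case class Person(name: String, age: Int)

object RDDToDataFramesWithCaseClasses {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("RDD To DF")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    // Imported only after the SQLContext instance exists.
    import sqlContext.implicits._

    val people = sc.textFile("resources/people.txt")
      .map(_.split(","))
      .map(p => Person(p(0), p(1).trim.toInt))
      .toDF() // resolves now that both conditions above are met
    people.registerTempTable("people")

    val teenagers = sqlContext.sql("SELECT name, age FROM people WHERE age >= 13 AND age <= 19")
    teenagers.map(t => "Name: " + t.getAs[String]("name")).collect().foreach(println)
  }
}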