value toDF is not a member of org.apache.spark.rdd.RDD


Problem Description

I've read about this issue in other SO posts and I still don't know what I'm doing wrong. In principle, adding these two lines:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._

should have done the trick, but the error persists.

This is my build.sbt:

name := "PickACustomer"

version := "1.0"

scalaVersion := "2.11.7"


libraryDependencies ++= Seq(
  "com.databricks" %% "spark-avro" % "2.0.1",
  "org.apache.spark" %% "spark-sql" % "1.6.0",
  "org.apache.spark" %% "spark-core" % "1.6.0")

and my Scala code is:

import scala.collection.mutable.Map
import scala.collection.immutable.Vector

import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql._


object Foo {

  def reshuffle_rdd(rawText: RDD[String]): RDD[Map[String, (Vector[(Double, Double, String)], Map[String, Double])]] = {...}

  def do_prediction(shuffled: RDD[Map[String, (Vector[(Double, Double, String)], Map[String, Double])]], prediction: Vector[(Double, Double, String)] => Map[String, Double]): RDD[Map[String, Double]] = {...}

  def get_match_rate_from_results(results: RDD[Map[String, Double]]): Map[String, Double] = {...}

  def retrieve_duid(element: Map[String, (Vector[(Double, Double, String)], Map[String, Double])]): Double = {...}

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
    if (!conf.getOption("spark.master").isDefined) conf.setMaster("local")

    val sc = new SparkContext(conf)

    // This should do the trick
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._

    val PATH_FILE = "/mnt/fast_export_file_clean.csv"
    val rawText = sc.textFile(PATH_FILE)
    val shuffled = reshuffle_rdd(rawText)

    // PREDICT AS A FUNCTION OF THE LAST SEEN UID
    val results = do_prediction(shuffled.filter(x => retrieve_duid(x) > 1), predict_as_last_uid)
    results.cache()

    case class Summary(ismatch: Double, t_to_last: Double, nflips: Double, d_uid: Double, truth: Double, guess: Double)

    val summary = results.map(x => Summary(x("match"), x("t_to_last"), x("nflips"), x("d_uid"), x("truth"), x("guess")))

    // PROBLEMATIC LINE
    val sum_df = summary.toDF()
  }
}

I always get:

value toDF is not a member of org.apache.spark.rdd.RDD[Summary]

Bit lost now. Any ideas?

Recommended Answer

Move your case class outside of main:

object Foo {

  case class Summary(ismatch: Double, t_to_last:Double, nflips:Double,d_uid: Double, truth:Double, guess:Double)

  def main(args: Array[String]){
    ...
  }

}

Something about the scoping of it is preventing Spark from being able to handle the automatic derivation of the schema for Summary: for a case class defined inside a method, the compiler cannot produce a TypeTag, and the toDF implicit brought in by sqlContext.implicits._ needs one to derive the schema. FYI I actually got a different error from sbt:

No TypeTag available for Summary
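
For reference, here is a minimal self-contained sketch of the corrected layout. It should compile against the Spark 1.6 artifacts from the build.sbt above; the toy data is hypothetical and merely stands in for the question's real results RDD:

import org.apache.spark.{SparkConf, SparkContext}

object Foo {

  // Top-level case class: the compiler can generate a TypeTag for it,
  // which the toDF implicit needs to derive the DataFrame schema
  case class Summary(ismatch: Double, t_to_last: Double, nflips: Double,
                     d_uid: Double, truth: Double, guess: Double)

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("PickACustomer").setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._

    // Hypothetical stand-in for the question's results.map(...) output
    val summary = sc.parallelize(Seq(Summary(1.0, 2.0, 3.0, 4.0, 1.0, 1.0)))

    val sum_df = summary.toDF()  // compiles now that Summary is top-level
    sum_df.printSchema()

    sc.stop()
  }
}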
