toDF is not working in Spark Scala IDE, but works perfectly in spark-shell


Problem Description

I am new to Spark and I am trying to run the commands below both from spark-shell and from the Spark Scala Eclipse IDE.

When I run them from the shell, they work perfectly.

But in the IDE, they give a compilation error. Please help.

    package sparkWCExample.spWCExample

    import org.apache.log4j.Level
    import org.apache.spark.sql.{ Dataset, SparkSession, DataFrame, Row }
    import org.apache.spark.sql.functions._
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkConf
    import org.apache.spark.sql._

    object TwitterDatawithDataset {
      def main(args: Array[String]) {
        val conf = new SparkConf()
            .setAppName("Spark Scala WordCount Example")
            .setMaster("local[1]")
        val spark = SparkSession.builder()
            .config(conf)
            .appName("CsvExample")
            .master("local")
            .getOrCreate()
        val csvData = spark.sparkContext
            .textFile("C:\\Sankha\\Study\\data\\bank_data.csv", 3)

        val sqlContext = new org.apache.spark.sql.SQLContext(spark.sparkContext)
        import sqlContext.implicits._
        case class Bank(age: Int, job: String)
        // parse each CSV line into its fields
        val dfData = csvData.map(_.split(","))
        val bankDF = dfData.map(x => Bank(x(0).toInt, x(1)))
        val df = bankDF.toDF()
      }
    }

The compile-time error is as follows:

    Description Resource Path Location Type value toDF is not a member of org.apache.spark.rdd.RDD[Bank] TwitterDatawithDataset.scala /spWCExample/src/main/java/sparkWCExample/spWCExample line 35 Scala Problem

Answer

To use toDF(), you must enable implicit conversions:

    import spark.implicits._
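Note that spark in this import is the SparkSession value created in your own code, not a package, so the import only compiles once that value is in scope. A minimal sketch (the session settings here are assumptions):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("CsvExample")   // assumed name, matching the question
      .master("local")
      .getOrCreate()
    import spark.implicits._   // member import from the `spark` instance; brings toDF() into scope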

In spark-shell it is enabled by default, which is why the code works there. The :imports command can be used to see which imports are already present in your shell:

scala> :imports
 1) import org.apache.spark.SparkContext._ (70 terms, 1 are implicit)
 2) import spark.implicits._       (1 types, 67 terms, 37 are implicit)
 3) import spark.sql               (1 terms)
 4) import org.apache.spark.sql.functions._ (385 terms)
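
Applied to the program in the question, a fixed IDE version might look like the sketch below. Besides the import, the case class is moved to the top level: in compiled code, the compiler can only derive the implicit Encoder that toDF() needs for a case class defined outside a method. The comma-split parsing and column layout are assumptions based on the original snippet.

    package sparkWCExample.spWCExample

    import org.apache.spark.sql.SparkSession

    // Top-level case class: toDF() needs an implicit Encoder, which the
    // compiler cannot derive for a case class defined inside a method.
    case class Bank(age: Int, job: String)

    object TwitterDatawithDataset {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("CsvExample")
          .master("local")
          .getOrCreate()
        import spark.implicits._ // enables rdd.toDF()

        // Assumed comma-separated layout: age,job
        val df = spark.sparkContext
          .textFile("C:\\Sankha\\Study\\data\\bank_data.csv", 3)
          .map(_.split(","))
          .map(x => Bank(x(0).toInt, x(1)))
          .toDF()

        df.show()
        spark.stop()
      }
    }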

