Why is Scala's Symbol not accepted as a column reference?
Problem Description
The Spark SQL examples seem to work well, except when expressions are needed:
scala> val teenagers = people.where('age >= 10).where('age <= 19).select('name)
<console>:23: error: value >= is not a member of Symbol
val teenagers = people.where('age >= 10).where('age <= 19).select('name)
scala> val teenagers = people.select('name)
<console>:23: error: type mismatch;
found : Symbol
required: org.apache.spark.sql.catalyst.expressions.Expression
val teenagers = people.select('name)
It seems that I need an import that is not documented.
If I bulk import everything:
import org.apache.spark.sql.catalyst.analysis._
import org.apache.spark.sql.catalyst.dsl._
import org.apache.spark.sql.catalyst.errors._
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.catalyst.rules._
import org.apache.spark.sql.catalyst.types._
import org.apache.spark.sql.catalyst.util._
import org.apache.spark.sql.execution
import org.apache.spark.sql.hive._
...and
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._
it works.
Recommended Answer
You are missing an implicit conversion.
val sqlContext: org.apache.spark.sql.SQLContext = ???
import sqlContext._
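With those implicits in scope, the original query compiles. A minimal sketch against the Spark 1.x API shown in the question (assuming sc is an existing SparkContext and people is a SchemaRDD as above):

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._ // brings the implicit Symbol-to-expression conversions into scope

// 'age and 'name are Scala Symbols; the implicits turn them into column expressions
val teenagers = people.where('age >= 10).where('age <= 19).select('name)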
That has, however, changed in recent (and supported) versions of Spark.
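For reference, a minimal sketch of the same query in the Spark 2.x+ Dataset API (the session and the people DataFrame are assumed to exist); here $"age" or col("age") is preferred over Symbol literals, whose 'name syntax is deprecated in newer Scala versions:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("example")
  .master("local[*]") // assumption: a local run for illustration
  .getOrCreate()
import spark.implicits._ // enables the $"column" syntax

// assumes a DataFrame `people` with `name` and `age` columns
val teenagers = people
  .where($"age" >= 10 && $"age" <= 19)
  .select($"name")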