"error: type mismatch" in Spark with same found and required datatypes


Problem description

I am using spark-shell to run my code. In my code, I have defined a function, and I call that function with its parameters.

The problem is that I get the error below when I call the function.

error: type mismatch;
 found   : org.apache.spark.graphx.Graph[VertexProperty(in class $iwC)(in class $iwC)(in class $iwC)(in class $iwC),String]
 required: org.apache.spark.graphx.Graph[VertexProperty(in class $iwC)(in class $iwC)(in class $iwC)(in class $iwC),String]
What is the reason behind this error? Does it have anything to do with the Graph datatype in Spark?

Code: this is the part of my code that involves the definition and the call of the function countpermissions.

import org.apache.spark.graphx.Graph

class VertexProperty(val id: Long) extends Serializable
case class User(val userId: Long, val userCode: String, val Name: String, val Surname: String) extends VertexProperty(userId)
case class Entitlement(val entitlementId: Long, val name: String) extends VertexProperty(entitlementId)

def countpermissions(es: String, sg: Graph[VertexProperty, String]): Long = {
    0L  // stub; the real logic does not matter for the error
}

// graph (a Graph[VertexProperty, String]) and sc are defined earlier in the session
val triplets = graph.triplets
val temp = triplets.map(t => t.attr)
val distinct_edge_string = temp.distinct
var bcast_graph = sc.broadcast(graph)
val edge_string_subgraph = distinct_edge_string.map(es => es -> bcast_graph.value.subgraph(epred = t => t.attr == es))
val temp1 = edge_string_subgraph.map(t => t._1 -> countpermissions(t._1, t._2))

The code runs without errors until the last line, where it produces the error mentioned above.

Recommended answer

Here is the trick. Let's open the REPL and define a class:

scala> case class Foo(i: Int)
defined class Foo

and a simple function which operates on this class:

scala> def fooToInt(foo: Foo) = foo.i
fooToInt: (foo: Foo)Int

Now redefine the class:

scala> case class Foo(i: Int)
defined class Foo

and create an instance:

scala> val foo = Foo(1)
foo: Foo = Foo(1)

All that is left is to call fooToInt:

scala> fooToInt(foo)
<console>:34: error: type mismatch;
 found   : Foo(in class $iwC)(in class $iwC)(in class $iwC)(in class $iwC)
 required: Foo(in class $iwC)(in class $iwC)(in class $iwC)(in class $iwC)
          fooToInt(foo)

Does it look familiar? Here is another trick to get a better idea of what is going on:

scala> case class Foo(i: Int)
defined class Foo

scala> val foo = Foo(1)
foo: Foo = Foo(1)

scala> case class Foo(i: Int)
defined class Foo

scala> def fooToInt(foo: Foo) = foo.i
<console>:31: error: reference to Foo is ambiguous;
it is imported twice in the same scope by
import INSTANCE.Foo
and import INSTANCE.Foo
         def fooToInt(foo: Foo) = foo.i

Long story short, this is expected, although slightly confusing, behavior that arises from ambiguous definitions existing in the same scope.
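You can confirm that the two definitions really are distinct runtime classes by comparing instances created before and after the redefinition (a minimal sketch; the exact res numbering depends on your session):

scala> case class Foo(i: Int)
defined class Foo

scala> val a = Foo(1)
a: Foo = Foo(1)

scala> case class Foo(i: Int)
defined class Foo

scala> val b = Foo(1)
b: Foo = Foo(1)

scala> a.getClass == b.getClass
res0: Boolean = false

Each REPL line is compiled inside its own wrapper class (the $iwC chains in the error message), so the second Foo is a different class from the first even though the two definitions are textually identical.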

Unless you want to periodically :reset the REPL state, you should keep track of the entities you create, and if type definitions change, make sure that no ambiguous definitions persist (overwrite things if needed) before you proceed.
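Applied to the original problem, that means re-evaluating countpermissions after the latest definition of VertexProperty, so the function and the call site agree on which class they mean. A minimal sketch with the Foo example from above (outputs are illustrative):

scala> case class Foo(i: Int)
defined class Foo

scala> def fooToInt(foo: Foo) = foo.i
fooToInt: (foo: Foo)Int

scala> fooToInt(Foo(1))
res1: Int = 1

Alternatively, the REPL's :paste mode compiles a whole block of definitions as a single unit, which keeps a class and the functions that use it pointing at the same definition.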

