How to resolve the AnalysisException: resolved attribute(s) in Spark

Question

val rdd = sc.parallelize(Seq(("vskp", Array(2.0, 1.0, 2.1, 5.4)),("hyd",Array(1.5, 0.5, 0.9, 3.7)),("hyd", Array(1.5, 0.5, 0.9, 3.2)),("tvm", Array(8.0, 2.9, 9.1, 2.5))))
val df1 = rdd.toDF("id", "vals")
val rdd1 = sc.parallelize(Seq(("vskp","ap"),("hyd","tel"),("bglr","kkt")))
val df2 = rdd1.toDF("id", "state")
val df3 = df1.join(df2,df1("id")===df2("id"),"left")

The join operation works fine, but when I reuse df2 I hit the resolved attribute(s) error below:

val rdd2 = sc.parallelize(Seq(("vskp", "Y"),("hyd", "N"),("hyd", "N"),("tvm", "Y")))
val df4 = rdd2.toDF("id", "existence")
val df5 = df4.join(df2,df4("id")===df2("id"),"left")

ERROR: org.apache.spark.sql.AnalysisException: resolved attribute(s) id#426

Answer

As mentioned in my comment, this is related to https://issues.apache.org/jira/browse/SPARK-10925 and, more specifically, https://issues.apache.org/jira/browse/SPARK-14948. Reusing the same reference creates ambiguity in naming, so you have to clone the DataFrame; see the last comment in https://issues.apache.org/jira/browse/SPARK-14948 for an example.
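A minimal sketch of that cloning workaround, applied to the snippet above: re-projecting df2 through toDF with its existing column names gives every column a fresh internal expression ID, so the second join no longer sees ambiguous references. The name df2Clone is purely illustrative and not from the original question.

// Clone df2 by re-aliasing its columns; each alias gets a new attribute ID
val df2Clone = df2.toDF(df2.columns: _*)

// The previously failing join now resolves both "id" columns unambiguously
val df5 = df4.join(df2Clone, df4("id") === df2Clone("id"), "left")
df5.show()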
