Why does spark-shell fail with NullPointerException?


Problem description

I try to execute spark-shell on Windows 10, but I keep getting this error every time I run it.

I used both the latest and the spark-1.5.0-bin-hadoop2.4 versions.

15/09/22 18:46:24 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/09/22 18:46:24 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/09/22 18:46:27 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/09/22 18:46:27 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/09/22 18:46:27 WARN : Your hostname, DESKTOP-8JS2RD5 resolves to a loopback/non-reachable address: fe80:0:0:0:0:5efe:c0a8:103%net1, but we couldn't find any external IP address!
java.lang.RuntimeException: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
    at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:163)
    at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:161)
    at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:168)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
    at $iwC$$iwC.<init>(<console>:9)
    at $iwC.<init>(<console>:18)
    at <init>(<console>:20)
    at .<init>(<console>:24)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
  Caused by: java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(Unknown Source)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
    at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:559)
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:534)
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
    ... 56 more

  <console>:10: error: not found: value sqlContext
               import sqlContext.implicits._
                ^
  <console>:10: error: not found: value sqlContext
               import sqlContext.sql
                ^

Answer

I used Spark 1.5.2 with Hadoop 2.6 and had similar problems. I solved them by doing the following steps (a consolidated command-prompt sketch follows the list):

  1. Download winutils.exe from the repository to some local folder, e.g. C:\hadoop\bin.

  2. Set HADOOP_HOME to C:\hadoop.

  3. Create the c:\tmp\hive directory (using Windows Explorer or any other tool).

  4. Open a command prompt with admin rights.

  5. Run C:\hadoop\bin\winutils.exe chmod 777 /tmp/hive
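
If it helps, the same sequence (apart from downloading winutils.exe itself) can be run from an elevated command prompt. This is a minimal sketch, assuming winutils.exe is already in C:\hadoop\bin; substitute whatever paths you chose in step 1:

    REM Point Hadoop at the folder whose bin subfolder contains winutils.exe
    setx HADOOP_HOME "C:\hadoop"
    REM setx only affects new processes, so also set it for the current window
    set HADOOP_HOME=C:\hadoop

    REM Create the scratch directory Hive needs and relax its permissions
    mkdir C:\tmp\hive
    C:\hadoop\bin\winutils.exe chmod 777 /tmp/hive

Note that setx writes the variable to the user environment for future sessions only, so either keep the extra set line above or start spark-shell from a freshly opened prompt.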

With that, I am still getting some warnings, but no ERRORs, and I can run Spark applications just fine.
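
As a quick smoke test that the fix worked, the sqlContext value the failing imports complained about (created by spark-shell at startup alongside sc in Spark 1.x) should now be usable. A minimal sketch, typed at the scala> prompt:

    // sc and sqlContext are pre-created by spark-shell; if the
    // NullPointerException is gone, both lines run without errors
    sc.parallelize(1 to 5).sum()            // exercises the SparkContext
    sqlContext.sql("SHOW TABLES").show()    // exercises the Hive-backed SQLContext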

