Spark Windows Installation Java Error

Problem description

I am continuing a question from this previous question - winutils spark windows installation - and I am aware of this thread - How to start Spark applications on Windows (aka Why Spark fails with NullPointerException)? - but I haven't found anything that fixes my problem yet.

I am also aware that it has been recommended to build Spark from source with Maven or sbt. I do not want to do that yet, as lots of people do not build Spark from source and it works fine for them.

So far I have set the following environment variables...

set _JAVA_OPTIONS=-Xmx512M -Xms512M
set SPARK_HOME=C:\spark\spark161binhadoop26\bin
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_51

::this used to be C:\winutils, but I moved it based on a suggestion
set HADOOP_HOME=C:\spark\spark161binhadoop26\bin

::the scala version here is 2.11.8
set SCALA_HOME=C:\scala\bin

::trying to get through the last warning. The one regarding no IP address
set SPARK_LOCAL_HOSTNAME=localhost
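
From what I understand (this is general Hadoop-on-Windows behavior, not something specific to the linked threads), the Hadoop shell utilities resolve the native binary as %HADOOP_HOME%\bin\winutils.exe, so HADOOP_HOME conventionally points at the parent of the bin folder rather than at bin itself. As an illustrative sketch of that layout (not my actual paths):

::winutils.exe would then be expected at C:\hadoop\bin\winutils.exe
set HADOOP_HOME=C:\hadoop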

and I have run the command (from the directory containing winutils)

>winutils.exe chmod 777 /tmp/hive
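
As a sanity check (assuming a winutils.exe build that supports the ls subcommand, which the common GitHub builds do), the permissions winutils sees can be listed with

>winutils.exe ls \tmp\hive

which I would expect to show drwxrwxrwx for \tmp\hive after the chmod, rather than the rw-rw-rw- reported in the error below.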

The error I receive is

Picked up _JAVA_OPTIONS: -Xmx512M -Xms512M
Picked up _JAVA_OPTIONS: -Xmx512M -Xms512M
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.1
      /_/

Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_51)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
16/05/19 17:30:21 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/spark/spark161binhadoop26/bin/../lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/spark/spark161binhadoop26/lib/datanucleus-api-jdo-3.2.6.jar."
16/05/19 17:30:21 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/spark/spark161binhadoop26/bin/../lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/spark/spark161binhadoop26/lib/datanucleus-rdbms-3.2.9.jar."
16/05/19 17:30:21 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/spark/spark161binhadoop26/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/spark/spark161binhadoop26/bin/../lib/datanucleus-core-3.2.10.jar."
16/05/19 17:30:21 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/05/19 17:30:21 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/05/19 17:31:17 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/05/19 17:31:17 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
16/05/19 17:31:18 WARN : Your hostname, DELE5450-16 resolves to a loopback/non-reachable address: fe80:0:0:0:4c1a:cec3:2cd3:a90a%eth13, but we couldn't find any external IP address!
java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
        at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
        at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:218)
        at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:208)
        at org.apache.spark.sql.hive.HiveContext.functionRegistry$lzycompute(HiveContext.scala:462)
        at org.apache.spark.sql.hive.HiveContext.functionRegistry(HiveContext.scala:461)
        at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40)
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:330)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
        at $iwC$$iwC.<init>(<console>:15)
        at $iwC.<init>(<console>:24)
        at <init>(<console>:26)
        at .<init>(<console>:30)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
        at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
        at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
        at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
        at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
        at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
        at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
        at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:612)
        at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
        ... 62 more

<console>:16: error: not found: value sqlContext
         import sqlContext.implicits._
                ^
<console>:16: error: not found: value sqlContext
         import sqlContext.sql
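
(The two "not found: value sqlContext" errors at the end look like a downstream symptom rather than a separate problem: because the HiveContext construction failed, the REPL never bound the sqlContext value, so its automatic imports fail.)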

EDIT: I updated the error message above after changing the path of winutils to be outside of the leaf directory "bin".

Does anyone have any suggestions?

Recommended answer

I actually re-downloaded winutils.exe and left it in another directory, at c:/winutils.exe. I might have downloaded the wrong one before, because it matters whether you download the 64-bit one or the 32-bit one. I wouldn't be too surprised if changing the directory helped, but I'm not sure how it would. Please refer to my first link, winutils spark windows installation, for the correct winutils.exe downloads from GitHub.
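
For anyone hitting the same "/tmp/hive on HDFS should be writable" error, here is a minimal sketch of the conventional setup (illustrative paths, and an assumption based on how Hadoop resolves %HADOOP_HOME%\bin\winutils.exe - not necessarily the exact layout that worked here):

::place the winutils.exe matching your OS bitness (32- vs 64-bit) in C:\hadoop\bin
set HADOOP_HOME=C:\hadoop
::create the Hive scratch dir and open up its permissions recursively
mkdir C:\tmp\hive
C:\hadoop\bin\winutils.exe chmod -R 777 \tmp\hive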
