java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries. Spark in Eclipse on Windows 7


Problem Description


I'm not able to run a simple Spark job in Scala IDE (a Maven Spark project) installed on Windows 7.

The Spark core dependency has been added to the project.
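
For reference, a minimal sketch of what that pom.xml entry might look like; the exact version is an assumption (the stack-trace line numbers below suggest Spark 1.6.x built for Scala 2.10):

    <!-- Assumption: Spark 1.6.0 / Scala 2.10, inferred from the stack trace -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.6.0</version>
    </dependency>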

val conf = new SparkConf().setAppName("DemoDF").setMaster("local")
val sc = new SparkContext(conf)
val logData = sc.textFile("File.txt")
logData.count()

Error:

16/02/26 18:29:33 INFO SparkContext: Created broadcast 0 from textFile at FrameDemo.scala:13
16/02/26 18:29:34 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
    at com.org.SparkDF.FrameDemo$.main(FrameDemo.scala:14)
    at com.org.SparkDF.FrameDemo.main(FrameDemo.scala)

Solution

Here is a good explanation of your problem, along with the solution. The null in null\bin\winutils.exe comes from Hadoop's Shell class: it builds the winutils path from the hadoop.home.dir property (or the HADOOP_HOME environment variable), and when neither is set the prefix resolves to null.

  1. Download winutils.exe from http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe and place it in a bin subfolder under a directory of your choice, e.g. C:\hadoop\bin.
  2. Set your HADOOP_HOME environment variable at the OS level (pointing to the directory above bin), or set it programmatically (see the sketch after this list):

    System.setProperty("hadoop.home.dir", "full path to the folder whose bin subfolder contains winutils.exe");

  3. Enjoy
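
Putting the pieces together, a minimal sketch of the fixed driver. The C:\hadoop path is an assumption; point it at whichever folder holds your bin\winutils.exe. The property must be set before the SparkContext is created, because Hadoop's Shell class resolves the winutils path in a static initializer on first use (visible as Shell.<clinit> in the stack trace above):

    import org.apache.spark.{SparkConf, SparkContext}

    object FrameDemo {
      def main(args: Array[String]): Unit = {
        // Assumption: winutils.exe lives at C:\hadoop\bin\winutils.exe, so
        // hadoop.home.dir must point at C:\hadoop (the parent of bin).
        // This must run before the SparkContext is created.
        System.setProperty("hadoop.home.dir", "C:\\hadoop")

        val conf = new SparkConf().setAppName("DemoDF").setMaster("local")
        val sc = new SparkContext(conf)
        val logData = sc.textFile("File.txt")
        println(logData.count())
        sc.stop()
      }
    }

If HADOOP_HOME is set at the OS level instead, the System.setProperty line can simply be dropped.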
