Hadoop Windows setup. Error while running WordCountJob: "No space available in any of the local directories"


Problem description

I am following this video tutorial, trying to set up hadoop on my machine.


    I've set it up successfully: no errors while executing start-all.cmd from the sbin directory.

    But when I try to execute my WordCount.jar file, the following error occurs:

    19/02/23 11:42:59 INFO localizer.ResourceLocalizationService: Created localizer for container_1550911199370_0001_02_000001
    19/02/23 11:42:59 INFO localizer.ResourceLocalizationService: Localizer failed
    org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.
            at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:399)
            at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:151)
            at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:132)
            at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:116)
            at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:545)
            at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1142)
    19/02/23 11:42:59 ERROR nodemanager.DeletionService: Exception during execution of task in DeletionService
    java.lang.NullPointerException: path cannot be null
            at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
            at org.apache.hadoop.fs.FileContext.fixRelativePart(FileContext.java:281)
            at org.apache.hadoop.fs.FileContext.delete(FileContext.java:769)
            at org.apache.hadoop.yarn.server.nodemanager.DeletionService$FileDeletionTask.run(DeletionService.java:273)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at java.lang.Thread.run(Thread.java:748)
    19/02/23 11:42:59 INFO container.ContainerImpl: Container container_1550911199370_0001_02_000001 transitioned from LOCAL
    

    I am sure that I have enough space for the job. My system is freshly installed:

    Configuration info:

    Here are my configuration files:

    core-site.xml

    <configuration>
       <property>
           <name>fs.defaultFS</name>
           <value>hdfs://localhost:9000</value>
       </property>
    </configuration>
    

    hdfs-site.xml

    <configuration>
       <property>
           <name>dfs.replication</name>
           <value>1</value>
       </property>
       <property>
           <name>dfs.namenode.name.dir</name>
           <value>file:///C:/hadoop-2.8.0/data/namenode</value>
       </property>
       <property>
           <name>dfs.datanode.data.dir</name>
           <value>file:///C:/hadoop-2.8.0/data/datanode</value>
       </property>
    </configuration>
    

    mapred-site.xml

    <configuration>
       <property>
           <name>mapreduce.framework.name</name>
           <value>yarn</value>
       </property>
    </configuration>
    

    yarn-site.xml

    <configuration>
       <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
       </property>
       <property>
            <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
       </property>
       <property>
            <name>yarn.nodemanager.disk-health-checker.enable</name>
            <value>false</value>
       </property>
    </configuration>
    

    Here is how I execute the jar (after preparing the input/output dirs):

    hadoop fs -mkdir /top
    hadoop fs -mkdir /top/input
    hadoop fs -mkdir /top/output
    hadoop fs -put C:/hadoop-2.8.0/wordcount2.txt /top/input
    hadoop jar C:/hadoop-2.8.0/WordCount.jar /top/input/wordcount2.txt /top/output/output.txt
    

    Solution

    The main error is:

    org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.

    In order to fix this issue you can try to:

    (1) Change the directory format in hdfs-site.xml

    In the hdfs-site.xml file try replacing the following values:

    <configuration>
       <property>
           <name>dfs.replication</name>
           <value>1</value>
       </property>
       <property>
           <name>dfs.namenode.name.dir</name>
           <value>file:///C:/hadoop-2.8.0/data/namenode</value>
       </property>
       <property>
           <name>dfs.datanode.data.dir</name>
           <value>file:///C:/hadoop-2.8.0/data/datanode</value>
       </property>
    </configuration>
    

    with

    <configuration>
       <property>
           <name>dfs.replication</name>
           <value>1</value>
       </property>
       <property>
           <name>dfs.namenode.name.dir</name>
           <value>C:\hadoop-2.8.0\data\namenode</value>
       </property>
       <property>
           <name>dfs.datanode.data.dir</name>
           <value>C:\hadoop-2.8.0\data\datanode</value>
       </property>
    </configuration>
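
    The change above amounts to rewriting each file:///C:/... URI as a plain Windows path. As a throwaway sketch of that mapping (the helper name is mine, not part of Hadoop):

    ```python
    def uri_to_windows_path(uri: str) -> str:
        """Turn a file:///C:/... URI into the plain C:\\... form used above."""
        prefix = "file:///"
        assert uri.startswith(prefix)
        # drop the scheme and flip forward slashes to backslashes
        return uri[len(prefix):].replace("/", "\\")

    print(uri_to_windows_path("file:///C:/hadoop-2.8.0/data/namenode"))
    # C:\hadoop-2.8.0\data\namenode
    ```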
    

    (2) Directory read & write permissions

    Check that the current user has permission to read and write the hadoop directory.
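
    One quick way to sanity-check this is a small sketch like the following (the helper is mine; os.access reflects the current process's effective permissions):

    ```python
    import os

    def can_read_write(path: str) -> bool:
        # True when the current user can both read and write the given directory
        return os.access(path, os.R_OK | os.W_OK)

    # e.g. can_read_write(r"C:\hadoop-2.8.0")  # hypothetical install path
    ```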

    (3) Node manager directories

    Try adding the following properties to the yarn-site.xml file:

    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>C:/hadoop-2.8.0/yarn/local</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>C:/hadoop-2.8.0/yarn/logs</value>
    </property>
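
    Those directories must also exist and be writable before the NodeManager starts; a small helper (my own, not a Hadoop tool) can pre-create them:

    ```python
    import os

    def ensure_yarn_dirs(base: str) -> None:
        # pre-create the NodeManager local and log directories so YARN can write to them
        for sub in ("local", "logs"):
            os.makedirs(os.path.join(base, "yarn", sub), exist_ok=True)

    # for the setup above this would be: ensure_yarn_dirs(r"C:\hadoop-2.8.0")
    ```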
    


    After changing the directories, try to reformat the namenode (hdfs namenode -format).

    If it still doesn't work, you can refer to the following step-by-step guide to install Hadoop on Windows; it worked fine for me:
