ERROR in datanode execution while running Hadoop first time in Windows 10
Question
I am trying to run Hadoop 3.1.1 on my Windows 10 machine. I modified all the files:
- hdfs-site.xml
- mapred-site.xml
- core-site.xml
- yarn-site.xml
Then, I executed the following command:
C:\hadoop-3.1.1\bin> hdfs namenode -format
The format ran correctly, so I changed to C:\hadoop-3.1.1\sbin
to execute the following command:
C:\hadoop-3.1.1\sbin> start-dfs.cmd
The command prompt opens 2 new windows: one for the datanode and another for the namenode.
The namenode window keeps running:
2018-09-02 21:37:06,232 INFO ipc.Server: IPC Server Responder: starting
2018-09-02 21:37:06,232 INFO ipc.Server: IPC Server listener on 9000: starting
2018-09-02 21:37:06,247 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
2018-09-02 21:37:06,247 INFO namenode.FSNamesystem: Starting services required for active state
2018-09-02 21:37:06,247 INFO namenode.FSDirectory: Initializing quota with 4 thread(s)
2018-09-02 21:37:06,247 INFO namenode.FSDirectory: Quota initialization completed in 3 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2018-09-02 21:37:06,279 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
Meanwhile, the datanode gives the following error:
ERROR: datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:220)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2762)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2677)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2719)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2863)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2887)
2018-09-02 21:37:04,250 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
2018-09-02 21:37:04,250 INFO datanode.DataNode: SHUTDOWN_MSG:
And then the datanode shuts down! I have tried several ways to overcome this error, but this is the first time I am installing Hadoop on Windows and I can't figure out what to do next!
Answer
I got things working after I removed the file system reference for the datanode in hdfs-site.xml. I found that this let the software create and initialise its own datanode directory, which then popped up under sbin. After that I could use hdfs without a hitch. Here is what worked for me with Hadoop 3.1.3 on Windows:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///C:/Users/myusername/hadoop/hadoop-3.1.3/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>datanode</value>
    </property>
</configuration>
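For reference, the relative value `datanode` above is resolved against the working directory of the process that starts the datanode, which is why the directory appeared under sbin after startup. If you would rather pin it to a fixed location, an absolute URI in the same style as the namenode entry should also work (a sketch only; the path below mirrors the namenode path above and is an assumption, not something from the original answer):

```xml
<property>
    <name>dfs.datanode.data.dir</name>
    <!-- Illustrative absolute path; adjust to your own install location. -->
    <value>file:///C:/Users/myusername/hadoop/hadoop-3.1.3/data/datanode</value>
</property>
```

Note that an explicit reference like this is exactly what the original question had and what triggered the DiskChecker "Too many failed volumes" error, so if that exception reappears, check that the directory exists and is writable by the datanode process, or fall back to the relative value shown in the working config.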
Cheers, MV