Hadoop pseudo distributed mode - Datanode and tasktracker not starting


Problem description

I am running a Red Hat Enterprise Linux Server release 6.4 (Santiago) distribution with Hadoop 1.1.2 installed on it. I have made the required configurations to enable the pseudo distributed mode. But on trying to run hadoop, the datanode and tasktracker don't start.

I am not able to copy any files to hdfs.

[hduser@is-joshbloom-hadoop hadoop]$ hadoop dfs -put README.txt /input
Warning: $HADOOP_HOME is deprecated.

13/05/23 16:42:00 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /input could only be replicated to 0 nodes, instead of 1

Also after trying hadoop-daemon.sh start datanode I get the message:

starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-is-joshbloom-hadoop.out

The same goes for the tasktracker. But when I try the same command for the namenode, secondarynamenode and jobtracker, they seem to be running:

namenode running as process 32933. Stop it first. 

I tried the following solutions:

  1. Reformatting the namenode
  2. Reinstalling Hadoop
  3. Installing a different version of Hadoop (1.0.4)

None of them worked. I have followed the same installation steps on my Mac and on an Amazon Ubuntu VM, and there it works perfectly.

How can I get hadoop working? Thanks!

**UPDATE**

Here is the log entry (from the datanode log):

2013-05-23 16:27:44,087 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.1.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
2013-05-23 16:27:44,382 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-05-23 16:27:44,432 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-05-23 16:27:44,446 ERROR org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Error getting localhost name. Using 'localhost'...
java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
        at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.getHostname(MetricsSystemImpl.java:463)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSystem(MetricsSystemImpl.java:394)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:390)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:152)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:133)
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40)
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1589)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)
Caused by: java.net.UnknownHostException: is-joshbloom-hadoop
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1434)
        ... 11 more
2013-05-23 16:27:44,453 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-05-23 16:27:44,453 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-05-23 16:27:44,768 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-05-23 16:27:44,914 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-05-23 16:27:45,212 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
        at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
        at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:271)
        at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:289)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:301)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)
Caused by: java.net.UnknownHostException: is-joshbloom-hadoop
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1434)
        ... 8 more

2013-05-23 16:27:45,228 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: is-joshbloom-hadoop: is-joshbloom-hadoop
************************************************************/

**UPDATE**

Content of /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Solution

Amend your /etc/hosts to include a hostname loopback mapping:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.1.1   is-joshbloom-hadoop
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Your problem is that your machine doesn't know how to resolve the hostname is-joshbloom-hadoop to a specific IP address. Resolution typically happens in one of two places: via a DNS server or via the local hosts file (the hosts file takes precedence).
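The hosts-file step of that lookup can be modeled in a few lines. This is only a simplified sketch of what the resolver does with /etc/hosts (not the real resolver code), but it shows why the file as originally posted makes `InetAddress.getLocalHost` fail for is-joshbloom-hadoop, and why adding one line fixes it:

```python
# Simplified model of a hosts-file lookup: scan each line, return the
# first IP whose name list contains the hostname. Not the real resolver,
# just an illustration of the step that runs before any DNS query.

def resolve_from_hosts(hostname, hosts_lines):
    """Return the IP of the first line listing `hostname`, else None."""
    for line in hosts_lines:
        line = line.split('#', 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        if hostname in names:
            return ip
    return None

# /etc/hosts as originally posted: no entry for the machine name
before = [
    "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4",
    "::1         localhost localhost.localdomain localhost6 localhost6.localdomain6",
]

# /etc/hosts with the suggested loopback mapping added
after = before[:1] + ["127.0.1.1   is-joshbloom-hadoop"] + before[1:]

print(resolve_from_hosts("is-joshbloom-hadoop", before))  # None -> UnknownHostException
print(resolve_from_hosts("is-joshbloom-hadoop", after))   # 127.0.1.1
```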

The above amendment to your hosts file allows your machine to resolve the machine name is-joshbloom-hadoop to the IP address 127.0.1.1. The OS reserves the whole 127.0.0.0/8 range for loopback, so you could use any address in that range. My Ubuntu laptop uses 127.0.1.1, and I'm sure this varies between OSes, but my guess is that by not using 127.0.0.1 you won't have to hunt for the hostname on the localhost line if you change your machine name in the future.
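Before restarting the daemons you can sanity-check the mapping from the shell. The commands below run against a sample copy of the file so they work anywhere; on the real machine you would inspect /etc/hosts itself (for example with `getent hosts is-joshbloom-hadoop`):

```shell
# Write a sample hosts file containing the suggested mapping.
cat > hosts.sample <<'EOF'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.1.1   is-joshbloom-hadoop
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
EOF

# Look the hostname up as a whole word; a matching line means the
# hosts-file step of resolution will succeed.
grep -w 'is-joshbloom-hadoop' hosts.sample
```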
