Hadoop: Slave nodes are not starting


Question


I am trying to set up a pseudo-distributed Hadoop cluster on my machine.

Env details:

  • Host OS: Windows
  • Guest OS: Ubuntu



    • VMs created: one master and one slave.
    • I was able to run the Hadoop wordcount example successfully on the single-node cluster.
    • But when I tried to add the slave, the DataNode, JobTracker, NameNode and SecondaryNameNode start fine on the master, but no DataNode starts on the slave.
      • I am able to ping the slave and log in to the slave using ssh from my master.
      • The /etc/hosts file contains the correct entries on both VMs.
      • I am using NAT and a Host-only Adapter to get static IPs for the VMs.
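When a DataNode fails to appear on a slave, the usual first check is running `jps` on the slave to see which Hadoop daemons are actually up. A minimal sketch of that check (the sample `jps` output below is illustrative, not taken from this question):

```shell
# check_daemon: succeed if the named daemon appears in a jps-style listing.
check_daemon() {
    echo "$1" | grep -q "$2"
}

# Illustrative jps output from a healthy Hadoop 1.x slave:
sample="2831 DataNode
2954 TaskTracker
3100 Jps"

check_daemon "$sample" DataNode && echo "DataNode is running"
```

If DataNode is missing from the listing, the startup error is usually recorded in the DataNode log under `$HADOOP_HOME/logs` on the slave.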

      Master Node = zenda1

      Slave Node = Zenda

      core-site.xml

      <configuration>
      <property>
          <name>hadoop.tmp.dir</name>
          <value>/tmp</value>
      </property>
      <property>
           <name>fs.default.name</name>
           <value>hdfs://zenda1:9000</value>
      </property>
      </configuration>
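One caveat about the `core-site.xml` above: pointing `hadoop.tmp.dir` at `/tmp` is fragile, because `/tmp` is typically cleared on reboot, which wipes the HDFS storage that lives under `hadoop.tmp.dir` by default. A more durable choice might look like this (the path is a hypothetical example):

```xml
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hduser/hadoop-tmp</value>
</property>
```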
      

      mapred-site.xml

       <configuration>
             <property>
                     <name>mapred.job.tracker</name>
                     <value>zenda1:9001</value>
             </property>
       </configuration>
      

      hdfs-site.xml

        <configuration>
          <property>
            <name>dfs.replication</name>
            <value>2</value>
          </property>
        </configuration>
      

      Masters

        zenda1
      

      Slaves

        zenda1
        Zenda
      

      The Hadoop folder is located at different locations in my master and slave nodes.

      Answer

      I have found the solution: the DataNode on the slave machine didn't start because the location of the Hadoop home on my master and slave were different. When I copied the slave node's Hadoop home to the Desktop (that's where my master node's Hadoop home resides), it started working fine.
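The fix can be verified with a quick path check: Hadoop's start scripts ssh into each host listed in `conf/slaves` and launch the daemons from the same directory path as on the master, so the Hadoop home must exist at an identical absolute path on every node. A minimal sketch, assuming a hypothetical master-side path (on a real cluster you would run the same check over ssh, e.g. `ssh Zenda "test -d $HADOOP_HOME"`):

```shell
# check_path: succeed only if a directory exists at the given absolute path.
check_path() {
    [ -d "$1" ]
}

# Hypothetical master-side Hadoop home; the same path must exist on each slave.
HADOOP_HOME="/home/user/Desktop/hadoop"
check_path "$HADOOP_HOME" || echo "Hadoop home missing at $HADOOP_HOME"
```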
