Hadoop - Restart datanode and tasktracker


Problem description

I want to bring down a single datanode and tasktracker so that some new changes I've made in my mapred-site.xml take effect, such as mapred.reduce.child.java.opts. How do I do that? However, I don't want to bring down the whole cluster, since I have active jobs running.
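For context, the kind of change the question refers to lives in mapred-site.xml. A minimal sketch (the property name is taken from the question; the value shown is only an illustration):

```xml
<configuration>
  <!-- JVM options passed to each reduce task's child JVM (Hadoop 1.x).
       The -Xmx value here is illustrative, not a recommendation. -->
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
</configuration>
```

Changes to this file only take effect for daemons started after the edit, which is why the TaskTracker has to be restarted.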

Also, how can that be done while ensuring that the namenode does not replicate the data blocks of the temporarily-down datanode onto another node?

Solution

To stop

You can stop the DataNode and TaskTracker from the NameNode's hadoop bin directory.

  ./hadoop-daemon.sh stop tasktracker
  ./hadoop-daemon.sh stop datanode

This script checks for the slaves file in hadoop's conf directory to stop the DataNode, and the same applies to the TaskTracker.

To start

Again, this script checks for the slaves file in hadoop's conf directory to start the DataNode and TaskTracker.

  ./hadoop-daemon.sh start tasktracker
  ./hadoop-daemon.sh start datanode
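On the second part of the question (avoiding block re-replication): by default the NameNode only declares a DataNode dead after roughly ten minutes without heartbeats, so a quick stop/start inside that window should not trigger re-replication. If more time is needed, the recheck interval can be raised in hdfs-site.xml. A hedged sketch, assuming Hadoop 1.x (the property name is the one commonly documented for 1.x; the value shown is the default, and raising it lengthens the window):

```xml
<configuration>
  <!-- Interval (ms) at which the NameNode re-checks DataNode liveness.
       The dead-node timeout is approximately
       2 * this value + 10 * the heartbeat interval,
       i.e. about 10.5 minutes with defaults. -->
  <property>
    <name>heartbeat.recheck.interval</name>
    <value>300000</value>
  </property>
</configuration>
```

Note that changing this requires a NameNode restart, so for a single quick daemon restart it is usually simpler to just stay inside the default window.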


