Hadoop: how to avoid the workers file due to Docker automatic names


Problem description


Some tools like Hadoop need the names of the workers to be specified explicitly (see the Slaves File section in the docs), but when you deploy with Docker Swarm it assigns automatic container names, so the workers file no longer works because the names listed in it don't exist. Is there any way to avoid this file or, at least, to assign aliases to the containers (independently of the container name) so it works?


Maybe I can't use a docker-compose.yml file and have to create the services manually on the cluster... Any kind of light on the subject would be really appreciated.

Accepted answer


Well, Hadoop documentation sucks... Apparently, if you set the alias of the master node in the core-site.xml file, you can omit the workers file. These are the steps I followed:

  1. Customized the core-site.xml file (in my docker-compose.yml file I named my master service nodemaster; see the compose sketch after these steps). This file must be present on both the master and the worker nodes:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://nodemaster:9000</value>
    </property>
    <!-- fs.default.name is the deprecated alias of fs.defaultFS, kept for older components -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://nodemaster:9000</value>
    </property>
</configuration>

  2. Now when I run:

start-dfs.sh
start-yarn.sh

everything connects automatically to the master.
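
For completeness, here is a minimal sketch of what the docker-compose.yml stack could look like, assuming a hypothetical my-hadoop image and a replicated worker service; the only detail taken from the steps above is that the master service is named nodemaster, which is the name core-site.xml resolves through Swarm's built-in service DNS:

version: "3.7"

services:
  nodemaster:                # the service name is a stable DNS name on the overlay network
    image: my-hadoop:3       # hypothetical image name
    networks:
      - hadoop-net
    ports:
      - "9870:9870"          # NameNode web UI (Hadoop 3 default port)

  worker:
    image: my-hadoop:3       # hypothetical image name
    deploy:
      replicas: 3            # each replica gets an automatic container name,
                             # but only needs to reach hdfs://nodemaster:9000
    networks:
      - hadoop-net

networks:
  hadoop-net:
    driver: overlay          # Swarm overlay network provides service-name DNS

The point of the sketch is only that the workers address the master through the service name, so the automatically generated container names never need to appear in any configuration file.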
