Get a yarn configuration from the command line
Question
In EMR, is there a way to get a specific value of the configuration, given the configuration key, using the yarn command?
For example, I would like to do something like this:
yarn get-config yarn.scheduler.maximum-allocation-mb
It's a bit non-intuitive, but it turns out the hdfs getconf command is capable of checking configuration properties for YARN and MapReduce, not only HDFS.
> hdfs getconf -confKey fs.defaultFS
hdfs://localhost:19000
> hdfs getconf -confKey dfs.namenode.name.dir
file:///Users/chris/hadoop-deploy-trunk/data/dfs/name
> hdfs getconf -confKey yarn.resourcemanager.address
0.0.0.0:8032
> hdfs getconf -confKey mapreduce.framework.name
yarn
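If you want a yarn get-config style interface like the one asked about in the question, one option is to wrap hdfs getconf in a small helper. This is only a sketch, not an existing tool: the function name get_hadoop_conf and the injectable run parameter are my own additions for illustration (the run parameter exists so the wrapper can be exercised without a live Hadoop installation).

```python
import subprocess

def get_hadoop_conf(key, run=subprocess.run):
    """Fetch an effective Hadoop configuration value via `hdfs getconf`.

    `run` defaults to shelling out to the real command; it is injectable
    so the wrapper can be tested with a stub. Raises CalledProcessError
    if the command fails (e.g. unknown key on some versions).
    """
    result = run(
        ["hdfs", "getconf", "-confKey", key],
        capture_output=True,
        text=True,
        check=True,
    )
    # getconf prints the resolved value followed by a newline.
    return result.stdout.strip()
```

On a cluster you would call it as, for example, get_hadoop_conf("yarn.scheduler.maximum-allocation-mb").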
A benefit of using this is that you'll see the actual, final results of any configuration properties as they are actually used by Hadoop. This would account for some of the more advanced configuration patterns, such as use of XInclude in the XML files or property substitutions, like this:
<property>
<description>The address of the applications manager interface in the RM.</description>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
Any scripting approach that tries to parse the XML files directly is unlikely to match the implementation exactly as it is done inside Hadoop, so it's better to ask Hadoop itself.
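To see why, here is a minimal sketch (using Python's standard xml.etree; the snippet variable is a made-up excerpt in the style of the property shown above) demonstrating that a direct XML parse hands back the raw placeholder, not the substituted value Hadoop actually uses:

```python
import xml.etree.ElementTree as ET

# A yarn-site.xml-style fragment using property substitution.
snippet = """
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
</configuration>
"""

root = ET.fromstring(snippet)
for prop in root.findall("property"):
    name = prop.findtext("name")
    value = prop.findtext("value")
    # The ${...} placeholder survives: a script would have to re-implement
    # Hadoop's variable substitution, default files, and override order itself.
    print(name, "=", value)
    # -> yarn.resourcemanager.address = ${yarn.resourcemanager.hostname}:8032
```

By contrast, hdfs getconf -confKey yarn.resourcemanager.address returns the fully resolved address, as in the session above.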
You might be wondering why an hdfs command can get configuration properties for YARN and MapReduce. Great question! It's somewhat of a coincidence: the implementation needs to inject an instance of MapReduce's JobConf into some objects created via reflection. The relevant code is visible here:
This code is executed as part of running the hdfs getconf command. By triggering a reference to JobConf, it forces class loading and static initialization of the relevant MapReduce and YARN classes, which add yarn-default.xml, yarn-site.xml, mapred-default.xml and mapred-site.xml to the set of configuration files in effect.
Since it's a coincidence of the implementation, it's possible that some of this behavior will change in future versions, but it would be a backwards-incompatible change, so we definitely wouldn't change that behavior inside the current Hadoop 2.x line. The Apache Hadoop Compatibility policy commits to backwards-compatibility within a major version line, so you can trust that this will continue working at least within the 2.x version line.