Find port number where HDFS is listening
Problem description
I want to access HDFS with fully qualified names such as:
hadoop fs -ls hdfs://machine-name:8020/user
I could also simply access hdfs with
hadoop fs -ls /user
However, I am writing test cases that should work on different distributions (HDP, Cloudera, MapR, etc.), which involves accessing HDFS files with qualified names.
I understand that hdfs://machine-name:8020 is defined in core-site.xml as fs.default.name. But this seems to differ across distributions. For example, HDFS is maprfs on MapR. IBM BigInsights doesn't even have a core-site.xml in $HADOOP_HOME/conf.
There doesn't seem to be a way for hadoop to tell me what's defined in fs.default.name via its command-line options.
How can I reliably get the value defined in fs.default.name from the command line?
The test will always be running on the namenode, so the machine name is easy. But getting the port number (8020) is a bit difficult. I tried lsof and netstat, but still couldn't find a reliable way.
The command below is available in Apache Hadoop 2.7.0 onwards and can be used to get the values of Hadoop configuration properties. fs.default.name is deprecated since Hadoop 2.0; fs.defaultFS is the updated property. Not sure whether this works in the case of maprfs.
hdfs getconf -confKey fs.defaultFS # ( new property )
or
hdfs getconf -confKey fs.default.name # ( old property )
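Since the question is specifically about the port, the getconf output can be trimmed down with shell parameter expansion. A minimal sketch, using a hard-coded sample value in place of the live command output (which varies by distribution):

```shell
# On a live cluster you would capture the real value:
#   uri=$(hdfs getconf -confKey fs.defaultFS)
# A sample value is used here for illustration.
uri="hdfs://machine-name:8020"

# Strip everything up to and including the last ':' to leave the port.
port="${uri##*:}"
echo "$port"
```

Note that this only works when fs.defaultFS actually includes a port; some filesystem URIs (e.g. maprfs:///) omit it.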
Not sure whether any command-line utilities are available for retrieving configuration property values in MapR or Hadoop 0.20 versions. In that case, you are better off retrieving the value corresponding to a configuration property in Java:
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration(); // loads core-site.xml from the classpath
System.out.println(conf.get("fs.default.name"));