Invalid URI for NameNode address
I'm trying to set up a Cloudera Hadoop cluster, with a master node containing the namenode, secondarynamenode and jobtracker, and two other nodes containing the datanode and tasktracker. The Cloudera version is 4.6 and the OS is Ubuntu Precise x64. The cluster is being created from AWS instances. Passwordless ssh has been set up as well, and Java is an Oracle 7 installation.
Whenever I execute sudo service hadoop-hdfs-namenode start, I get:
2014-05-14 05:08:38,023 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:329)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:317)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:370)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:422)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:442)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
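The exception comes from the NameNode's sanity check that the filesystem URI in fs.defaultFS carries an authority (a host[:port] part) usable as an RPC address. When the property isn't picked up at all, Hadoop falls back to its built-in default file:///, which has no authority, hence the message. A minimal Python sketch of that check (an illustration, not Hadoop's actual code):

```python
from urllib.parse import urlparse

def check_namenode_uri(fs_default_fs):
    """Mimic the NameNode's sanity check: the filesystem URI
    must carry an authority (host[:port]) to serve as an RPC address."""
    uri = urlparse(fs_default_fs)
    if not uri.netloc:
        raise ValueError(
            "Invalid URI for NameNode address (check fs.defaultFS): "
            f"{fs_default_fs} has no authority."
        )
    return uri.netloc

# The fallback default fails the check; a proper hdfs:// URI passes:
# check_namenode_uri("file:///")             -> raises ValueError
# check_namenode_uri("hdfs://10.0.0.1:8020") -> "10.0.0.1:8020"
```

So the question is really why the configured hdfs:// value below is not reaching the daemon.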
My core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://<master-ip>:8020</value>
</property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hdfs://<master-ip>:8021</value>
</property>
</configuration>
hdfs-site.xml:
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
I tried using the public ip, private-ip, public dns and fqdn, but the result is the same.
The directory /etc/hadoop/conf.empty looks like:
-rw-r--r-- 1 root root 2998 Feb 26 10:21 capacity-scheduler.xml
-rw-r--r-- 1 root hadoop 1335 Feb 26 10:21 configuration.xsl
-rw-r--r-- 1 root root 233 Feb 26 10:21 container-executor.cfg
-rwxr-xr-x 1 root root 287 May 14 05:09 core-site.xml
-rwxr-xr-x 1 root root 2445 May 14 05:09 hadoop-env.sh
-rw-r--r-- 1 root hadoop 1774 Feb 26 10:21 hadoop-metrics2.properties
-rw-r--r-- 1 root hadoop 2490 Feb 26 10:21 hadoop-metrics.properties
-rw-r--r-- 1 root hadoop 9196 Feb 26 10:21 hadoop-policy.xml
-rwxr-xr-x 1 root root 332 May 14 05:09 hdfs-site.xml
-rw-r--r-- 1 root hadoop 8735 Feb 26 10:21 log4j.properties
-rw-r--r-- 1 root root 4113 Feb 26 10:21 mapred-queues.xml.template
-rwxr-xr-x 1 root root 290 May 14 05:09 mapred-site.xml
-rw-r--r-- 1 root root 178 Feb 26 10:21 mapred-site.xml.template
-rwxr-xr-x 1 root root 12 May 14 05:09 masters
-rwxr-xr-x 1 root root 29 May 14 05:09 slaves
-rw-r--r-- 1 root hadoop 2316 Feb 26 10:21 ssl-client.xml.example
-rw-r--r-- 1 root hadoop 2251 Feb 26 10:21 ssl-server.xml.example
-rw-r--r-- 1 root root 2513 Feb 26 10:21 yarn-env.sh
-rw-r--r-- 1 root root 2262 Feb 26 10:21 yarn-site.xml
and slaves lists the ip addresses of the two slave machines:
<slave1-ip>
<slave2-ip>
Executing update-alternatives --get-selections | grep hadoop gives:
hadoop-conf auto /etc/hadoop/conf.empty
I've done a lot of searching, but didn't find anything that could help me fix the problem. Could someone offer any clue as to what's going on?
I ran into this same thing. I found I had to add a fs.defaultFS property to hdfs-site.xml to match the fs.defaultFS property in core-site.xml:
<property>
<name>fs.defaultFS</name>
<value>hdfs://<master-ip>:8020</value>
</property>
Once I added this, the secondary namenode started OK.
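Why duplicating the property can help: each daemon loads core-site.xml and then its own *-site.xml, with later files overriding earlier ones; if the startup path reads hdfs-site.xml but misses core-site.xml for some reason, fs.defaultFS silently falls back to the built-in file:/// default. A small sketch of that merge order (using stdlib XML parsing, not Hadoop's Configuration class):

```python
import xml.etree.ElementTree as ET

def load_hadoop_conf(xml_strings):
    """Merge Hadoop-style configuration files in order;
    later files override earlier ones, mirroring how a daemon
    reads core-site.xml and then its own *-site.xml."""
    conf = {"fs.defaultFS": "file:///"}  # Hadoop's built-in default
    for xml in xml_strings:
        root = ET.fromstring(xml)
        for prop in root.iter("property"):
            name = prop.findtext("name")
            if name is not None:
                conf[name] = prop.findtext("value")
    return conf

core_site = """<configuration>
  <property><name>fs.defaultFS</name>
  <value>hdfs://10.0.0.1:8020</value></property>
</configuration>"""

hdfs_site = """<configuration>
  <property><name>dfs.replication</name><value>2</value></property>
</configuration>"""

# With core-site.xml in the mix the configured URI wins...
print(load_hadoop_conf([core_site, hdfs_site])["fs.defaultFS"])
# ...but if only hdfs-site.xml is read, the file:/// default remains.
print(load_hadoop_conf([hdfs_site])["fs.defaultFS"])
```

Putting fs.defaultFS into hdfs-site.xml as well makes the value present no matter which of the two files the daemon actually reads.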