"hadoop namenode -format" returns a java.net.UnknownHostException

Question

I'm currently learning hadoop and I'm trying to set up a single-node test as described in http://hadoop.apache.org/common/docs/current/single_node_setup.html

I've configured ssh (I can log in without a password).

My server is on our intranet, behind a proxy.

When I try to run

bin/hadoop namenode -format

I get the following java.net.UnknownHostException exception:

$ bin/hadoop namenode -format
11/06/10 15:36:47 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = java.net.UnknownHostException: srv-clc-04.univ-nantes.prive3: srv-clc-04.univ-nantes.prive3
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
Re-format filesystem in /home/lindenb/tmp/HADOOP/dfs/name ? (Y or N) Y
11/06/10 15:36:50 INFO util.GSet: VM type       = 64-bit
11/06/10 15:36:50 INFO util.GSet: 2% max memory = 19.1675 MB
11/06/10 15:36:50 INFO util.GSet: capacity      = 2^21 = 2097152 entries
11/06/10 15:36:50 INFO util.GSet: recommended=2097152, actual=2097152
11/06/10 15:36:50 INFO namenode.FSNamesystem: fsOwner=lindenb
11/06/10 15:36:50 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/10 15:36:50 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/10 15:36:50 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/06/10 15:36:50 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/06/10 15:36:50 INFO namenode.NameNode: Caching file names occuring more than 10 times 
11/06/10 15:36:50 INFO common.Storage: Image file of size 113 saved in 0 seconds.
11/06/10 15:36:50 INFO common.Storage: Storage directory /home/lindenb/tmp/HADOOP/dfs/name has been successfully formatted.
11/06/10 15:36:50 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: srv-clc-04.univ-nantes.prive3: srv-clc-04.univ-nantes.prive3
************************************************************/

After that, I started hadoop with

./bin/start-all.sh

but I got another exception when I tried to copy a local file:

 bin/hadoop fs  -copyFromLocal ~/file.txt  file.txt

DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lindenb/file.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)

How can I fix this problem, please?

Thanks

Solution

UnknownHostException is thrown when hadoop tries to resolve the DNS name (srv-clc-04.univ-nantes.prive3) to an IP address, and that lookup fails.
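
A quick way to confirm that name resolution really is the problem, independently of hadoop, is to ask the OS to resolve the hostname yourself. A minimal check on a typical Linux host, using the hostname from your log:

hostname
# should print srv-clc-04.univ-nantes.prive3
getent hosts srv-clc-04.univ-nantes.prive3
# if this prints nothing (exit status 2), the name is unresolvable and
# hadoop's startup hits the same java.net.UnknownHostException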

Look for the domain name in the configuration files and replace it with "localhost". (Or update the DNS so that it resolves the name to an IP address.)
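
For example, a sketch assuming the stock 0.20.x tarball layout (config under conf/) and root access to edit /etc/hosts:

# find where the unresolvable name is referenced in the hadoop config
grep -r "srv-clc-04.univ-nantes.prive3" conf/

# option 1: point hadoop at localhost instead, e.g. in conf/core-site.xml
# (hdfs://localhost:9000 is the value used in the single-node setup guide):
#   <property>
#     <name>fs.default.name</name>
#     <value>hdfs://localhost:9000</value>
#   </property>

# option 2: make the name resolvable by mapping it in /etc/hosts
# (use the machine's real IP instead of 127.0.0.1 if other hosts must reach it)
echo "127.0.0.1  srv-clc-04.univ-nantes.prive3" | sudo tee -a /etc/hosts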
