File jobtracker.info could only be replicated to 0 nodes, instead of 1


Problem description

I am trying to set up a Hadoop cluster on Fedora 17. When I run the bin/start-all.sh command, the daemons start on the master and slave nodes. But when I view the log file for the data node on the master node, I get the following ERROR:

ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop1 cause:java.io.IOException: File /home/hadoop1/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1

2013-03-23 15:37:08,205 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9100, call addBlock(/home/hadoop1/mapred/system/jobtracker.info, DFSClient_-838454688, null) from 127.0.0.1:40173: error: java.io.IOException: File /home/hadoop1/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /home/hadoop1/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

I am also trying to run the wordcount program. While copying data into HDFS with the command

$ bin/hadoop dfs -copyFromLocal /home/hadoop1/Documents/wordcount/ /home/hadoop1/hdfs/data

I get the following error:

WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /home/hadoop1/hdfs/data/wordcount/pg20417.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)

13/03/23 15:41:05 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/03/23 15:41:05 WARN hdfs.DFSClient: Could not get block locations. Source file "/home/hadoop1/hdfs/data/wordcount/pg20417.txt" - Aborting...
copyFromLocal: java.io.IOException: File /home/hadoop1/hdfs/data/wordcount/pg20417.txt could only be replicated to 0 nodes, instead of 1
13/03/23 15:41:05 ERROR hdfs.DFSClient: Exception closing file /home/hadoop1/hdfs/data/wordcount/pg20417.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /home/hadoop1/hdfs/data/wordcount/pg20417.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

Any help in this regard is appreciated.

Recommended answer

I managed to solve this issue...

Step I) The firewall was active on the master and slave machines. I disabled it with the command "systemctl disable iptables.service".

Step II) I had wrongly assigned "hdfs://localhost:9100" to "fs.default.name" in the slave's core-site.xml configuration file. I changed it to "hdfs://master:9100".
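For reference, the corrected property on the slave would look roughly like the sketch below. This assumes a Hadoop 1.x-era core-site.xml and that the hostname "master" resolves to the NameNode machine (e.g. via /etc/hosts); adjust host and port to your setup.

```
<!-- core-site.xml on each slave: point fs.default.name at the master's
     NameNode rather than localhost, so DataNodes register with the
     NameNode instead of looking for one on their own machine. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9100</value>
  </property>
</configuration>
```

Pointing this at localhost is a common cause of the "replicated to 0 nodes" error: the NameNode then sees no live DataNodes, so it has nowhere to place the block.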

Now my Hadoop cluster is up.

Thank you, Chris, for your kind help.
