LeaseExpiredException: No lease error on HDFS

Problem description

I am trying to load a large amount of data into HDFS, and I sometimes get the error below. Any idea why?

The error:

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /data/work/20110926-134514/_temporary/_attempt_201109110407_0167_r_000026_0/hbase/site=3815120/day=20110925/107-107-3815120-20110926-134514-r-00026 File does not exist. Holder DFSClient_attempt_201109110407_0167_r_000026_0 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1557)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1548)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1603)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1591)
at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:675)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
at $Proxy1.complete(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy1.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3566)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3481)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
at org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:966)
at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.close(SequenceFile.java:1297)
at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.close(SequenceFileOutputFormat.java:78)
at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs$RecordWriterWithCounter.close(MultipleOutputs.java:303)
at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.close(MultipleOutputs.java:456)
at com.my.hadoop.platform.sortmerger.MergeSortHBaseReducer.cleanup(MergeSortHBaseReducer.java:145)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:178)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:572)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:414)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)

Solution

I managed to fix the problem:

When the job ends, it deletes the /data/work/ folder. If a few jobs are running in parallel, that deletion also removes the files of another job that is still writing, and that job then fails with this exception when it tries to close its output. What actually needs to be deleted is only the current run's own subdirectory under /data/work/, not the shared folder itself.
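A minimal sketch of that scoped cleanup in Java (the language of the stack traces above), assuming each run writes under a timestamped subdirectory like the 20110926-134514 path in the trace; the class and method names here are hypothetical:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class JobScopedCleanup {
    // Delete only this run's subdirectory under /data/work/, never the shared
    // parent, so parallel jobs cannot remove each other's open files.
    public static void deleteRunDir(Configuration conf, String runId) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path runDir = new Path("/data/work/" + runId); // e.g. /data/work/20110926-134514
        if (fs.exists(runDir)) {
            fs.delete(runDir, true); // recursive, but scoped to this run only
        }
    }
}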

In other words, this exception is thrown when the job tries to access files that no longer exist.
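Beyond scoping the cleanup, one way to avoid the collision altogether is to give every run its own working directory at submission time, so parallel jobs never share /data/work/ directly. A sketch under the same assumptions, using the org.apache.hadoop.mapreduce API that appears in the stack trace (the run-ID scheme and class name are hypothetical):

import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitWithRunDir {
    public static Job configure(Configuration conf) throws IOException {
        Job job = new Job(conf, "merge-sort-hbase"); // Job.getInstance(...) on Hadoop 2.x+
        // Timestamped run directory, mirroring the 20110926-134514 pattern above.
        String runId = new SimpleDateFormat("yyyyMMdd-HHmmss").format(new Date());
        FileOutputFormat.setOutputPath(job, new Path("/data/work/" + runId));
        return job;
    }
}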
