LeaseExpiredException in Hive

Problem Description

Hi all. I ran a Hive query; it gets to about 97% and then fails with org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on a temporary file (full log below).

Can anyone kindly explain why this error occurs? This is a single-user Hive cluster environment.

Thank you in advance.

2013-01-02 22:16:17,833 ERROR org.apache.hadoop.hdfs.DFSClient: Exception closing file /tmp/hive-hadoop/hive_2013-01-01_21-21-32_067_6367259756570557828/_task_tmp.-ext-10002/_tmp.000004_1 : org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-01-01_21-21-32_067_6367259756570557828/_task_tmp.-ext-10002/_tmp.000004_1 File does not exist. Holder DFSClient_attempt_201301012114_0002_m_000004_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1677)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1665)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:718)
        at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-01-01_21-21-32_067_6367259756570557828/_task_tmp.-ext-10002/_tmp.000004_1 File does not exist. Holder DFSClient_attempt_201301012114_0002_m_000004_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1677)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1665)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:718)
        at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

        at org.apache.hadoop.ipc.Client.call(Client.java:1070)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at $Proxy2.complete(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy2.complete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3897)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3812)
        at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1345)
        at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:275)
        at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:328)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1446)
        at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:277)
        at org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:260)

Solution

SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;
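
These two settings raise Hive's limits on how many dynamic partitions a single query (and a single node) may create. The LeaseExpiredException itself is usually a secondary symptom: a task attempt gets killed mid-write (for example after tripping the dynamic-partition limit, or when a speculative duplicate attempt is discarded), its temporary output file under /tmp/hive-.../_task_tmp... is cleaned up, and the dying attempt's DFSClient then fails to close a file that no longer exists. Raising the limits removes the underlying failure. Below is a minimal sketch of how the settings would be applied in the same session before the failing statement; the table and column names (events, events_by_day, dt) are hypothetical placeholders, not from the original post.

-- Hypothetical dynamic-partition insert; table/column names are placeholders.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;

INSERT OVERWRITE TABLE events_by_day PARTITION (dt)
SELECT user_id, event_type, dt
FROM events;

If the limits are already generous, disabling speculative execution (e.g. SET mapred.map.tasks.speculative.execution=false;) is another commonly suggested workaround, since a killed speculative attempt can hit the same "No lease ... File does not exist" error during cleanup.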
