Getting java.lang.OutOfMemoryError: GC overhead limit exceeded While Submitting Map Reduce


Problem Description


I am getting the message below while submitting a MapReduce job. I started my MapReduce program with -XX:MaxPermSize=128m.

Does anyone have a clue what is going on right now?

    17/03/24 09:58:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 1160328 for svc_pffr on ha-hdfs:nameservice3
    17/03/24 09:58:46 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
    17/03/24 09:58:46 INFO security.TokenCache: Got dt for hdfs://nameservice3; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice3, Ident: (HDFS_DELEGATION_TOKEN token 1160328 for svc_pffr)
    17/03/24 09:58:46 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
    17/03/24 09:58:46 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
    17/03/24 09:58:47 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
    17/03/24 10:01:55 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/svc_pffr/.staging/job_1489708003568_5870
    Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
            at org.apache.hadoop.security.token.Token.<init>(Token.java:85)
            at org.apache.hadoop.hdfs.protocol.LocatedBlock.<init>(LocatedBlock.java:52)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:755)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convertLocatedBlock(PBHelper.java:1174)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1192)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1328)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1436)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1445)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:549)
            at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
            at com.sun.proxy.$Proxy23.getListing(Unknown Source)
            at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1893)
            at org.apache.hadoop.hdfs.DistributedFileSystem$15.<init>(DistributedFileSystem.java:742)
            at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:731)
            at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1664)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:300)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
            at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
            at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:589)
            at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:606)
            at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:490)
            at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
            at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
            at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
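
For reference, the WARN line in the log suggests implementing the Tool interface and launching the job through ToolRunner. That alone does not explain the OutOfMemoryError (per the stack trace it happens in the submitting client JVM while it lists the input files to compute splits, and -XX:MaxPermSize only sizes the permanent generation, not the heap that "GC overhead limit exceeded" refers to), but a minimal sketch of the suggested pattern could look like the following. Class, job, and path names are illustrative and not taken from the original post:

    // Minimal Tool/ToolRunner driver sketch; names are illustrative only.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyMapReduceDriver extends Configured implements Tool {

        @Override
        public int run(String[] args) throws Exception {
            // getConf() already contains any -D key=value options parsed by ToolRunner.
            Job job = Job.getInstance(getConf(), "my-mapreduce-job");
            job.setJarByClass(MyMapReduceDriver.class);
            job.setInputFormatClass(SequenceFileInputFormat.class);
            SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // Mapper, reducer and output key/value classes would be set here in a real driver.
            return job.waitForCompletion(true) ? 0 : 1;
        }

        public static void main(String[] args) throws Exception {
            // ToolRunner strips the generic Hadoop options (-D, -conf, -files, ...)
            // before passing the remaining arguments to run().
            System.exit(ToolRunner.run(new Configuration(), new MyMapReduceDriver(), args));
        }
    }

With this pattern, generic Hadoop options such as -D key=value given on the command line are parsed by ToolRunner and end up in the job's Configuration instead of being ignored.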

Solution

I had this exception, and I just formatted my HDFS because it was saturated!

$ hadoop namenode -format

Please pay attention: if you format your HDFS, you will lose all the metadata related to the DataNodes, so all the information on the DataNodes will be lost!
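
Since formatting the NameNode is that destructive, one might first confirm that HDFS really is saturated, for example with hdfs dfsadmin -report or programmatically through the FileSystem API. The following is only a small sketch, not part of the original answer, and assumes the Hadoop configuration files on the classpath point at the cluster:

    // Sketch: report overall HDFS capacity and usage before deciding to format.
    // Assumes core-site.xml / hdfs-site.xml on the classpath point at the cluster.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;

    public class HdfsUsageCheck {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FsStatus status = fs.getStatus();
            double usedPct = 100.0 * status.getUsed() / status.getCapacity();
            System.out.printf("capacity=%d bytes, used=%d bytes (%.1f%%), remaining=%d bytes%n",
                    status.getCapacity(), status.getUsed(), usedPct, status.getRemaining());
            fs.close();
        }
    }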
