"Java Heap space Out Of Memory Error" while running a mapreduce program


Problem description

I'm facing an Out Of Memory error while running a mapreduce program. If I keep 260 files in one folder and give it as input to the mapreduce program, it shows a Java Heap space Out of Memory error. If I give only 100 files as input, it runs fine. So how can I limit the mapreduce program to take only 100 files (~50 MB) at a time? Can anyone please advise on this issue?

Number of files: 318, number of blocks: 1 (block size: 128 MB); Hadoop is running on a 32-bit system.

My StackTrace:
==============
    15/05/05 11:52:47 INFO input.FileInputFormat: Total input paths to process : 318
    15/05/05 11:52:47 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 52027734
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: number of splits:1
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local634564612_0001
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.
end-notification.max.attempts;  Ignoring.
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
    15/05/05 11:52:48 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
    15/05/05 11:52:48 INFO mapreduce.Job: Running job: job_local634564612_0001
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter set in config null
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Waiting for map tasks
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Starting task: attempt_local634564612_0001_m_000000_0
    15/05/05 11:52:48 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
    15/05/05 11:52:48 INFO mapred.MapTask: Processing split: Paths:/user/usr/local/upload/20120713T07-45-42.682358000Z_79.150.138.86-1412.c2s_ndttrace:0+78550,/user/usr/local/upload/20120713T07-45-43.356723000Z_151.40.240.66-53426.c2s_ndttrace:0+32768,/user/usr/local/upload/20120713T07-45-43.718556000Z_85.26.235.102-25300.c2s_ndttrace:0+10130,/user/usr/local/upload
         .....
         .....
         .....
/20120713T08-33-41.259331000Z_84.122.129.103-61321.c2s_ndttrace:0+19148,/user/usr/local/upload/20120713T08-33-54.972649000Z_86.69.144.214-49599.c2s_ndttrace:0+63014,/user/usr/local/upload/20120713T08-33-56.162340000Z_41.143.91.156-50785.c2s_ndttrace:0+13658,/user/usr/local/upload/20120713T08-33-59.768261000Z_31.187.12.141-50274.c2s_ndttrace:0+126542,/user/usr/local/upload/20120713T08-34-03.950055000Z_78.119.172.109-51495.c2s_ndttrace:0+92676,/user/usr/local/upload/20120713T08-34-08.378534000Z_87.7.113.115-62238.c2s_ndttrace:0+49410,/user/usr/local/upload/20120713T08-34-26.258570000Z_151.13.227.66-33198.c2s_ndttrace:0+2666092
    15/05/05 11:52:49 INFO mapreduce.Job: Job job_local634564612_0001 running in uber mode : false
    15/05/05 11:52:49 INFO mapreduce.Job:  map 0% reduce 0%
    15/05/05 11:52:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    15/05/05 11:52:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 78643196(314572784)
    15/05/05 11:52:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 300
    15/05/05 11:52:53 INFO mapred.MapTask: soft limit at 251658240
    15/05/05 11:52:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 314572800
    15/05/05 11:52:53 INFO mapred.MapTask: kvstart = 78643196; length = 19660800
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
    15/05/05 11:52:55 INFO mapred.MapTask: Starting flush of map output
    15/05/05 11:52:55 INFO mapred.MapTask: Spilling map output
    15/05/05 11:52:55 INFO mapred.MapTask: bufstart = 0; bufend = 105296; bufvoid = 314572800
    15/05/05 11:52:55 INFO mapred.MapTask: kvstart = 78643196(314572784); kvend = 78637988(314551952); length = 5209/19660800
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map > map
    15/05/05 11:52:55 INFO mapred.MapTask: Finished spill 0
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map task executor complete.
    15/05/05 11:52:55 WARN mapred.LocalJobRunner: job_local634564612_0001
    java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
    Caused by: java.lang.OutOfMemoryError: Java heap space
        at net.ripe.hadoop.pcap.PcapReader.nextPacket(PcapReader.java:208)
        at net.ripe.hadoop.pcap.PcapReader.access$0(PcapReader.java:173)
        at net.ripe.hadoop.pcap.PcapReader$PacketIterator.fetchNext(PcapReader.java:554)
        at net.ripe.hadoop.pcap.PcapReader$PacketIterator.hasNext(PcapReader.java:559)
        at net.ripe.hadoop.pcap.io.reader.PcapRecordReader.nextKeyValue(PcapRecordReader.java:57)
        at net.ripe.hadoop.pcap.io.reader.CombineBinaryRecordReader.nextKeyValue(CombineBinaryRecordReader.java:42)
        at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    15/05/05 11:52:56 INFO mapreduce.Job: Job job_local634564612_0001 failed with state FAILED due to: NA
    15/05/05 11:52:56 INFO mapreduce.Job: Counters: 25
        File System Counters
            FILE: Number of bytes read=29002348
            FILE: Number of bytes written=29450636
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=103142
            HDFS: Number of bytes written=0
            HDFS: Number of read operations=6
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=1
        Map-Reduce Framework
            Map input records=1303
            Map output records=1303
            Map output bytes=105296
            Map output materialized bytes=0
            Input split bytes=38078
            Combine input records=0
            Spilled Records=0
            Failed Shuffles=0
            Merged Map outputs=0
            GC time elapsed (ms)=593
            CPU time spent (ms)=0
            Physical memory (bytes) snapshot=0
            Virtual memory (bytes) snapshot=0
            Total committed heap usage (bytes)=1745092608
        File Input Format Counters 
            Bytes Read=0

Solution

STEP 1:

Add this line to the .bashrc file found in your hadoop home directory:

export JVM_ARGS="-Xms1024m -Xmx1024m"

This changes the Java heap memory to 1024 MB (the default is 128 MB). If you are running the Hadoop job from a terminal, run this as the hadoop user so the change takes effect:

source ~/.bashrc

If you still get the error, try step 2.

STEP 2:

Add this line to the hadoop-env.sh file:

export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"

If there is still no luck, try step 3.

STEP 3:

Add this property to the mapred-site.xml file:

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
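
Note that on Hadoop 2.x (MRv2), mapred.child.java.opts is deprecated in favour of per-task options; if that applies to your setup, the equivalent entries would look roughly like this (same 1024 MB heap assumed):

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx1024m</value>
  </property>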

All of these steps increase the default Java heap memory.
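
Separately, the log above shows CombineFileInputFormat packing all 318 trace files into a single split ("number of splits:1"), which is what forces one map task to hold so much data. If the goal is for each map task to take only ~50 MB of files at a time, capping the combined split size is one option. Below is a minimal driver-side sketch, assuming Hadoop 2.x and that the pcap input format is a CombineFileInputFormat subclass that honors mapreduce.input.fileinputformat.split.maxsize rather than hard-coding its own maximum; the class name and the omitted job setup are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class CappedSplitDriver {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Stop CombineFileInputFormat from packing more than ~50 MB of
        // trace files into a single split, so one map task no longer has
        // to read all 318 files. (Assumes the pcap input format does not
        // set its own max split size programmatically.)
        conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 50L * 1024 * 1024);

        Job job = Job.getInstance(conf, "pcap analysis, capped splits");
        // ... set jar, mapper, input format and input/output paths exactly
        // as in the original driver ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Smaller splits mean more map tasks, but each task then stays within the heap configured in the steps above.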
