Apache Pig: java.lang.OutOfMemoryError: Requested array size exceeds VM limit


Problem description

I'm running Pig 15 and am trying to group some data. I'm running into a Requested array size exceeds VM limit error. The file is pretty small and takes just 10 mappers of 2.5 GB each to run with no memory errors.

Below is a snippet of what I'm doing:

sample_set = LOAD 's3n://<bucket>/<dev_dir>/000*-part.gz' USING PigStorage(',') AS (col1:chararray,col2:chararray..col23:chararray);
sample_set_group_by_col1 = GROUP sample_set BY col1;
sample_set_group_by_col1_10 = LIMIT sample_set_group_by_col1 10;
DUMP sample_set_group_by_col1_10;

This job fails with the following error:

2016-08-08 14:28:59,622 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.DataOutputStream.writeUTF(DataOutputStream.java:401)
at java.io.DataOutputStream.writeUTF(DataOutputStream.java:323)
at org.apache.pig.data.utils.SedesHelper.writeChararray(SedesHelper.java:66)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:580)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:462)
at org.apache.pig.data.utils.SedesHelper.writeGenericTuple(SedesHelper.java:135)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:650)
at org.apache.pig.data.BinInterSedes.writeBag(BinInterSedes.java:641)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:474)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:462)
at org.apache.pig.data.utils.SedesHelper.writeGenericTuple(SedesHelper.java:135)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:650)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:470)
at org.apache.pig.data.BinSedesTuple.write(BinSedesTuple.java:40)
at org.apache.pig.impl.io.PigNullableWritable.write(PigNullableWritable.java:139)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:198)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.spillSingleRecord(MapTask.java:1696)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1180)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:712)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:135)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:281)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:274)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)

Has anybody come across this error before? If yes, what's the solution to this?

Answer

From the error it looks like the output of the GROUP BY statement is huge. The record it creates doesn't fit into the Java heap space currently available. Normally Hadoop mapper and reducer tasks are allocated 1 GB of heap space. Try increasing the memory available to the tasks while running this Pig script with the following parameters:

SET mapreduce.map.memory.mb 4096;
SET mapreduce.reduce.memory.mb 6144;

If it still doesn't work, try increasing the sizes further using the above parameters.
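
Those two properties set the size of the YARN container each task runs in; on many clusters the child JVM heap is configured separately via the java.opts properties, so raising only the container size may not change the heap actually available to the task. Below is a minimal sketch of both settings together, assuming the standard Hadoop property names; the -Xmx values are illustrative assumptions and should stay below the matching container sizes.

-- Container sizes (MB) for map and reduce tasks
SET mapreduce.map.memory.mb 4096;
SET mapreduce.reduce.memory.mb 6144;
-- JVM heap inside each container; illustrative values, kept below the container size
SET mapreduce.map.java.opts '-Xmx3276m';
SET mapreduce.reduce.java.opts '-Xmx4915m';

Keeping -Xmx at roughly 80% of the container size leaves headroom for non-heap memory such as thread stacks and direct buffers.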
