Error: Java heap space
Problem description
In Ubuntu, when I am running the hadoop example :
$bin/hadoop jar hadoop-examples-1.0.4.jar grep input output 'dfs[a-z.]+'
$echo $HADOOP_HEAPSIZE
2000
In log, I am getting the error as :
INFO mapred.JobClient: Task Id : attempt_201303251213_0012_m_000000_2, Status : FAILED
Error: Java heap space
13/03/25 15:03:43 INFO mapred.JobClient: Task Id : attempt_201303251213_0012_m_000001_2, Status : FAILED
Error: Java heap space
13/03/25 15:04:28 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201303251213_0012_m_000000
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
	at org.apache.hadoop.examples.Grep.run(Grep.java:69)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.Grep.main(Grep.java:93)
Can anyone tell me what the problem is?
Clearly you have run out of the heap size allotted to Java, so you should try increasing it.
To do that, you can execute the following before running the hadoop command:
export HADOOP_OPTS="-Xmx4096m"
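As a quick sanity check before launching the job, you can confirm the variable actually took effect in your shell (a minimal sketch; the 4096m value is just the example used above):

```shell
# Set a 4 GB maximum heap for the Hadoop client JVM
export HADOOP_OPTS="-Xmx4096m"

# Confirm the setting before launching the job
echo "HADOOP_OPTS is: $HADOOP_OPTS"
# prints: HADOOP_OPTS is: -Xmx4096m
```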
Alternatively, you can make the same setting permanent by adding the following to your mapred-site.xml file, which lives in HADOOP_HOME/conf/:
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx4096m</value>
</property>
This sets your Java heap space to 4096 MB (4 GB); you may even try a lower value first to see if that works. If it doesn't, increase the value further if your machine supports it; if not, move to a machine with more memory and try there. A heap-space error simply means you don't have enough RAM available for Java.
UPDATE: For Hadoop 2+, make the changes in mapreduce.map.java.opts instead.
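For reference, the Hadoop 2+ version of the same mapred-site.xml setting might look like this (a sketch: mapreduce.reduce.java.opts is included on the assumption you want reducers covered too, and the heap values are only examples to tune for your cluster):

```xml
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx4096m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx4096m</value>
</property>
```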