Hadoop container killed, but job succeeded

Problem description

I'm trying to execute a MapReduce program on Hadoop. When I submit the jar from my MacBook and run the job on a desktop, the job fails with a container running beyond virtual memory limits, yet http://master-hadoop:8088/cluster tells me that my job succeeded, and the result appears to be correct.

You can see that the physical memory used is 170 MB while the virtual memory used is 17.8 GB, and the input file is only 10 MB.

What I can't figure out is why the program uses so much virtual memory, why Hadoop says my job succeeded, and whether the result can be trusted, given that the container was killed.

16/11/07 21:31:40 INFO Join: 20161107213140620
16/11/07 21:31:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/07 21:31:42 INFO client.RMProxy: Connecting to ResourceManager at master-hadoop/192.168.199.162:8032
16/11/07 21:31:43 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/11/07 21:31:44 INFO input.FileInputFormat: Total input paths to process : 2
16/11/07 21:31:44 INFO mapreduce.JobSubmitter: number of splits:2
16/11/07 21:31:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1478524274348_0001
16/11/07 21:31:46 INFO impl.YarnClientImpl: Submitted application application_1478524274348_0001
16/11/07 21:31:46 INFO mapreduce.Job: The url to track the job: http://master-hadoop:8088/proxy/application_1478524274348_0001/
16/11/07 21:31:46 INFO mapreduce.Job: Running job: job_1478524274348_0001
16/11/07 21:31:55 INFO mapreduce.Job: Job job_1478524274348_0001 running in uber mode : false
16/11/07 21:31:55 INFO mapreduce.Job:  map 0% reduce 0%
16/11/07 21:32:04 INFO mapreduce.Job:  map 100% reduce 0%
16/11/07 21:32:11 INFO mapreduce.Job:  map 100% reduce 100%
16/11/07 21:32:12 INFO mapreduce.Job: Job job_1478524274348_0001 completed successfully
16/11/07 21:32:12 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=1974092
        FILE: Number of bytes written=4301313
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=20971727
        HDFS: Number of bytes written=23746
        HDFS: Number of read operations=9
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=13291
        Total time spent by all reduces in occupied slots (ms)=3985
        Total time spent by all map tasks (ms)=13291
        Total time spent by all reduce tasks (ms)=3985
        Total vcore-milliseconds taken by all map tasks=13291
        Total vcore-milliseconds taken by all reduce tasks=3985
        Total megabyte-milliseconds taken by all map tasks=13609984
        Total megabyte-milliseconds taken by all reduce tasks=4080640
    Map-Reduce Framework
        Map input records=162852
        Map output records=162852
        Map output bytes=1648382
        Map output materialized bytes=1974098
        Input split bytes=207
        Combine input records=0
        Combine output records=0
        Reduce input groups=105348
        Reduce shuffle bytes=1974098
        Reduce input records=162852
        Reduce output records=4423
        Spilled Records=325704
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=364
        CPU time spent (ms)=6300
        Physical memory (bytes) snapshot=705949696
        Virtual memory (bytes) snapshot=5738041344
        Total committed heap usage (bytes)=492830720
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=20971520
    File Output Format Counters 
        Bytes Written=23746
16/11/07 21:32:12 INFO client.RMProxy: Connecting to ResourceManager at master-hadoop/192.168.199.162:8032
16/11/07 21:32:12 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/11/07 21:32:12 INFO input.FileInputFormat: Total input paths to process : 2
16/11/07 21:32:12 INFO mapreduce.JobSubmitter: number of splits:2
16/11/07 21:32:13 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1478524274348_0002
16/11/07 21:32:13 INFO impl.YarnClientImpl: Submitted application application_1478524274348_0002
16/11/07 21:32:13 INFO mapreduce.Job: The url to track the job: http://master-hadoop:8088/proxy/application_1478524274348_0002/
16/11/07 21:32:13 INFO mapreduce.Job: Running job: job_1478524274348_0002
16/11/07 21:32:24 INFO mapreduce.Job: Job job_1478524274348_0002 running in uber mode : false
16/11/07 21:32:24 INFO mapreduce.Job:  map 0% reduce 0%
16/11/07 21:32:32 INFO mapreduce.Job:  map 100% reduce 0%
16/11/07 21:32:38 INFO mapreduce.Job: Task Id : attempt_1478524274348_0002_r_000000_0, Status : FAILED
Container [pid=4170,containerID=container_1478524274348_0002_01_000004] is running beyond virtual memory limits. Current usage: 170.0 MB of 1 GB physical memory used; 17.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1478524274348_0002_01_000004 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 4174 4170 4170 4170 (java) 407 30 19121176576 42828 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx200m -Djava.io.tmpdir=/usr/local/hadoop/tmp/nm-local-dir/usercache/lining/appcache/application_1478524274348_0002/container_1478524274348_0002_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop/logs/userlogs/application_1478524274348_0002/container_1478524274348_0002_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dyarn.app.mapreduce.shuffle.logger=INFO,shuffleCLA -Dyarn.app.mapreduce.shuffle.logfile=syslog.shuffle -Dyarn.app.mapreduce.shuffle.log.filesize=0 -Dyarn.app.mapreduce.shuffle.log.backups=0 org.apache.hadoop.mapred.YarnChild 127.0.1.1 33077 attempt_1478524274348_0002_r_000000_0 4 
    |- 4170 4168 4170 4170 (bash) 0 0 17051648 700 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx200m -Djava.io.tmpdir=/usr/local/hadoop/tmp/nm-local-dir/usercache/lining/appcache/application_1478524274348_0002/container_1478524274348_0002_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop/logs/userlogs/application_1478524274348_0002/container_1478524274348_0002_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dyarn.app.mapreduce.shuffle.logger=INFO,shuffleCLA -Dyarn.app.mapreduce.shuffle.logfile=syslog.shuffle -Dyarn.app.mapreduce.shuffle.log.filesize=0 -Dyarn.app.mapreduce.shuffle.log.backups=0 org.apache.hadoop.mapred.YarnChild 127.0.1.1 33077 attempt_1478524274348_0002_r_000000_0 4 1>/usr/local/hadoop/logs/userlogs/application_1478524274348_0002/container_1478524274348_0002_01_000004/stdout 2>/usr/local/hadoop/logs/userlogs/application_1478524274348_0002/container_1478524274348_0002_01_000004/stderr  

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

16/11/07 21:32:47 INFO mapreduce.Job:  map 100% reduce 100%
16/11/07 21:32:48 INFO mapreduce.Job: Job job_1478524274348_0002 completed successfully
16/11/07 21:32:48 INFO mapreduce.Job: Counters: 50
    File System Counters
        FILE: Number of bytes read=3373558
        FILE: Number of bytes written=7100224
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=21019219
        HDFS: Number of bytes written=307797
        HDFS: Number of read operations=15
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Failed reduce tasks=1
        Launched map tasks=2
        Launched reduce tasks=2
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=12513
        Total time spent by all reduces in occupied slots (ms)=7584
        Total time spent by all map tasks (ms)=12513
        Total time spent by all reduce tasks (ms)=7584
        Total vcore-milliseconds taken by all map tasks=12513
        Total vcore-milliseconds taken by all reduce tasks=7584
        Total megabyte-milliseconds taken by all map tasks=12813312
        Total megabyte-milliseconds taken by all reduce tasks=7766016
    Map-Reduce Framework
        Map input records=162852
        Map output records=22115
        Map output bytes=3315932
        Map output materialized bytes=3373564
        Input split bytes=207
        Combine input records=0
        Combine output records=0
        Reduce input groups=177
        Reduce shuffle bytes=3373564
        Reduce input records=22115
        Reduce output records=17692
        Spilled Records=44230
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=381
        CPU time spent (ms)=5320
        Physical memory (bytes) snapshot=727543808
        Virtual memory (bytes) snapshot=22958596096
        Total committed heap usage (bytes)=493355008
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=20971520
    File Output Format Counters 
        Bytes Written=307797
16/11/07 21:32:48 INFO Join: 20161107213248192

Solution

The first attempt of one of your reduce tasks failed, but it was most likely rescheduled and then completed successfully, which is why your entire job reports success. The second job's counters bear this out: Failed reduce tasks=1 but Launched reduce tasks=2, so a second attempt of the killed reducer ran to completion.
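
As for the virtual memory itself: the killed container was allocated 1 GB of physical memory, and the NodeManager enforces a virtual memory limit of that allocation times yarn.nodemanager.vmem-pmem-ratio (2.1 by default), which is exactly the "2.1 GB of virtual memory" limit in your error message. A 64-bit Java 8 JVM commonly reserves a very large virtual address space even while using little physical memory, so a figure like 17.8 GB is not unusual. If the check keeps killing otherwise healthy containers, a common workaround is to relax or disable it in yarn-site.xml. The sketch below uses illustrative values, not settings verified against this cluster:

    <!-- yarn-site.xml: relax the NodeManager's virtual-memory enforcement.
         Values are illustrative assumptions, not taken from this cluster. -->
    <property>
        <!-- Option 1: disable the virtual-memory check entirely (default: true). -->
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <!-- Option 2: keep the check, but allow more virtual memory per unit
             of physical memory (default ratio: 2.1). -->
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>5</value>
    </property>

Restart the NodeManagers after changing these settings. Disabling the check is generally considered safe, since the physical-memory limit still applies.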
