Explanation for Hadoop MapReduce Console Output

Problem Description

I am a newbie in the Hadoop environment. I have set up a 2-node Hadoop cluster and run the sample MapReduce application (WordCount, actually). Then I got output like this:

    File System Counters
    FILE: Number of bytes read=492
    FILE: Number of bytes written=6463014
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=71012
    HDFS: Number of bytes written=195
    HDFS: Number of read operations=404
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=2
Job Counters 
    Launched map tasks=80
    Launched reduce tasks=1
    Data-local map tasks=80
    Total time spent by all maps in occupied slots (ms)=429151
    Total time spent by all reduces in occupied slots (ms)=72374
Map-Reduce Framework
    Map input records=80
    Map output records=8
    Map output bytes=470
    Map output materialized bytes=966
    Input split bytes=11040
    Combine input records=0
    Combine output records=0
    Reduce input groups=1
    Reduce shuffle bytes=966
    Reduce input records=8
    Reduce output records=5
    Spilled Records=16
    Shuffled Maps =80
    Failed Shuffles=0
    Merged Map outputs=80
    GC time elapsed (ms)=5033
    CPU time spent (ms)=59310
    Physical memory (bytes) snapshot=18515763200
    Virtual memory (bytes) snapshot=169808543744
    Total committed heap usage (bytes)=14363394048
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters 
    Bytes Read=29603
File Output Format Counters 
    Bytes Written=195

Is there an explanation for each of these values? In particular:

  1. Total time spent by all maps in occupied slots (ms)
  2. Total time spent by all reduces in occupied slots (ms)
  3. CPU time spent (ms)
  4. Physical memory (bytes) snapshot
  5. Virtual memory (bytes) snapshot
  6. Total committed heap usage (bytes)


Recommended Answer

The MapReduce framework maintains counters while a job is executing. These counters are shown to the user to help understand job statistics and to support benchmarking and performance analysis. Your job output shows some of these counters. There is a good explanation of counters in Chapter 8 of the Definitive Guide; I suggest you check it once.
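
As an illustration of how these counters get filled in while the job runs, a mapper can bump its own user-defined counter through the same mechanism the framework uses for the built-in ones. This is only a sketch: the enum and the empty-line check are made up for the example, while the Mapper/Context counter calls themselves are the standard Hadoop API.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Minimal sketch of a WordCount-style mapper that also increments a
    // user-defined counter. Built-in counters such as "Map input records"
    // are updated by the framework in the same way and aggregated across
    // all task attempts before being printed to the console.
    public class TokenCounterMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        // Hypothetical counter, used only for illustration.
        enum CustomCounters { EMPTY_LINES }

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString().trim();
            if (line.isEmpty()) {
                context.getCounter(CustomCounters.EMPTY_LINES).increment(1);
                return;
            }
            for (String token : line.split("\\s+")) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }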

To explain the items you asked about:

1) Total time spent by all maps - The total time taken running map tasks, in milliseconds. This includes tasks that were started speculatively (speculative execution re-runs a slow or apparently failed task after waiting for a specified time; in layman's terms, a speculative task is a duplicate run of a particular map task).

2) Total time spent by all reduces - The total time taken running reduce tasks, in milliseconds.

3) CPU time - The cumulative CPU time for a task, in milliseconds.

4) Physical memory - The physical memory being used by a task, in bytes; this also counts the memory used for spills.

5) Virtual memory - The virtual memory being used by a task, in bytes.

6) Total committed heap usage - The total amount of memory available in the JVM, in bytes.
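
For reference, the same values can be read programmatically once the job has completed, through the Counters API. The sketch below assumes the usual Hadoop 2.x JobCounter/TaskCounter enum names, which may differ slightly between versions:

    import org.apache.hadoop.mapreduce.Counters;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.JobCounter;
    import org.apache.hadoop.mapreduce.TaskCounter;

    // Minimal sketch: after job.waitForCompletion(true), the counters that
    // were printed to the console can also be inspected via the Job object.
    public class CounterReport {

        public static void print(Job job) throws Exception {
            Counters counters = job.getCounters();

            // 1) and 2): slot-milliseconds consumed by map and reduce tasks.
            long mapSlotMillis =
                    counters.findCounter(JobCounter.SLOTS_MILLIS_MAPS).getValue();
            long reduceSlotMillis =
                    counters.findCounter(JobCounter.SLOTS_MILLIS_REDUCES).getValue();

            // 3) to 6): resource counters aggregated over all task attempts.
            long cpuMillis =
                    counters.findCounter(TaskCounter.CPU_MILLISECONDS).getValue();
            long physicalBytes =
                    counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).getValue();
            long virtualBytes =
                    counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).getValue();
            long committedHeapBytes =
                    counters.findCounter(TaskCounter.COMMITTED_HEAP_BYTES).getValue();

            System.out.println("Map slot ms:          " + mapSlotMillis);
            System.out.println("Reduce slot ms:       " + reduceSlotMillis);
            System.out.println("CPU ms:               " + cpuMillis);
            System.out.println("Physical mem (bytes): " + physicalBytes);
            System.out.println("Virtual mem (bytes):  " + virtualBytes);
            System.out.println("Committed heap:       " + committedHeapBytes);
        }
    }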

Hope this helps.

Thanks.

RAM is the primary memory used when processing a job. The data is brought into RAM and the job is processed while the data is held there. However, the data might be bigger than the RAM allocated. In such scenarios, the operating system keeps part of the data on disk and swaps it in and out of RAM, so that even a smaller amount of RAM is sufficient for files that are larger than memory. For example: if RAM is 64 MB and the file size is 128 MB, then 64 MB is kept in RAM first and the other 64 MB on disk, and the OS swaps between them. It does not actually keep two fixed 64 MB halves; internally the data is divided into segments/pages.

I just gave an example for understanding. Virtual memory is a concept that lets you work with files bigger than RAM by using pages and swapping between disk and RAM. So in the case above, 64 MB of disk is effectively used as if it were RAM, which is why it is called virtual memory.

Hope you understand. If you are satisfied with the answer, please accept it as the answer. Let me know if you have any questions.

Heap is the JVM memory used for object storage, which is set using JVM_OPTS on the command line. Normally all Java programs need these settings.
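
As a concrete but hedged illustration: in a MapReduce job the task JVM heap is usually passed via the *.java.opts properties, while the container memory limit is set separately. The Hadoop 2.x property names below are the common ones but may differ in your distribution:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // Minimal sketch, assuming Hadoop 2.x property names: -Xmx controls the
    // task JVM heap (reflected in "Total committed heap usage"), while
    // *.memory.mb is the physical memory limit of the container hosting
    // the task.
    public class HeapSettingsExample {

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // JVM heap for map and reduce task attempts.
            conf.set("mapreduce.map.java.opts", "-Xmx768m");
            conf.set("mapreduce.reduce.java.opts", "-Xmx1024m");

            // Container memory limits; keep them larger than -Xmx to leave
            // room for non-heap memory (thread stacks, direct buffers, etc.).
            conf.setInt("mapreduce.map.memory.mb", 1024);
            conf.setInt("mapreduce.reduce.memory.mb", 1536);

            Job job = Job.getInstance(conf, "job with explicit heap settings");
            // ... set mapper, reducer, input and output paths here as usual ...
        }
    }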
