Hadoop input split size vs block size

Question

I am going through the Hadoop Definitive Guide, where it clearly explains input splits. It goes like this:

Input splits don't contain the actual data; rather, they hold the storage locations of the data on HDFS.

Usually, the size of an input split is the same as the block size.
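
For context: Hadoop's FileInputFormat derives the split size from the HDFS block size together with the configured minimum and maximum split sizes. The snippet below is a minimal sketch of that computation (simplified rather than copied from the real class); with the default settings it evaluates to the block size, which is why a split is "usually" the same size as a block.

```java
// Minimal sketch of how FileInputFormat derives the split size.
// minSize / maxSize correspond to mapreduce.input.fileinputformat.split.minsize
// and mapreduce.input.fileinputformat.split.maxsize (Hadoop 2.x property names);
// blockSize is the HDFS block size of the file.
public class SplitSizeSketch {

    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        // With the default minSize (1) and maxSize (Long.MAX_VALUE)
        // this is simply the block size.
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;   // 128MB HDFS block
        long minSize   = 1L;                   // default minimum split size
        long maxSize   = Long.MAX_VALUE;       // default maximum split size
        // Prints 134217728 (= 128MB): split size equals block size by default.
        System.out.println(computeSplitSize(blockSize, minSize, maxSize));
    }
}
```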

1) Let's say a 64MB block is on node A and replicated to two other nodes (B, C), and the input split size for the map-reduce program is 64MB. Will this split only have the location for node A, or will it have locations for all three nodes A, B, C?

2) Since the data is local to all three nodes, how does the framework decide (pick) which node to run a map task on?

3) How is it handled if the input split size is greater or smaller than the block size?

Answer

The answer by @user1668782 is a great explanation for the question, and I'll try to give a graphical depiction of it.

Assume we have a 400MB file that consists of 4 records (e.g. a 400MB CSV file with 4 rows of 100MB each).

If the HDFS block size is configured as 128MB, then the 4 records will not be distributed evenly among the blocks. It will look like this.
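
The original answer shows this with a diagram; working out the arithmetic under the same assumptions (100MB records on 128MB blocks), the layout is roughly:

• Block 1 (bytes 0–128MB): all of record 1 plus the first 28MB of record 2
• Block 2 (128–256MB): the remaining 72MB of record 2 plus the first 56MB of record 3
• Block 3 (256–384MB): the remaining 44MB of record 3 plus the first 84MB of record 4
• Block 4 (384–400MB): the last 16MB of record 4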

• Block 1 contains the entire first record and a 28MB chunk of the second record.
• If a mapper is to be run on Block 1, it cannot process the data correctly, since it won't have the entire second record.
• This is exactly the problem that input splits solve: input splits respect logical record boundaries (see the sketch below).
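
How a record reader enforces those boundaries in practice: line-oriented readers such as TextInputFormat's LineRecordReader skip the first partial line of any split that does not start at byte 0, and read past the split's end to finish their last line. The sketch below is a simplified illustration of that idea; it is not the real Hadoop implementation.

```java
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;

// Simplified sketch of split-boundary handling for line records.
// Illustrative only; the real logic lives in LineRecordReader.
class LineBoundarySketch {
    private final FSDataInputStream in;
    private final long end;   // split end offset (start + split length)
    private long pos;         // current position in the file

    LineBoundarySketch(FSDataInputStream in, long start, long length) throws IOException {
        this.in = in;
        this.end = start + length;
        in.seek(start);
        pos = start;
        if (start != 0) {
            // Not the first split: the partial line at the beginning belongs
            // to the previous split, so skip ahead to the next newline.
            skipToNewline();
        }
    }

    /** Returns the next complete line, reading past 'end' if the last record spills over. */
    String nextRecord() throws IOException {
        if (pos > end) {
            return null;   // a record starting after 'end' belongs to the next split
        }
        StringBuilder line = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) {
            pos++;
            if (b == '\n') break;      // record boundary reached
            line.append((char) b);
        }
        return (line.length() == 0 && b == -1) ? null : line.toString();
    }

    private void skipToNewline() throws IOException {
        int b;
        while ((b = in.read()) != -1) {
            pos++;
            if (b == '\n') break;
        }
    }
}
```

The net effect is that every complete record is processed by exactly one mapper, even when its bytes are physically spread over two HDFS blocks (the tail of the record is then fetched from the node holding the next block).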

Now assume the input split size is configured as 200MB.

Therefore input split 1 should contain both record 1 and record 2. Input split 2 will not start with record 2, since record 2 has already been assigned to input split 1; input split 2 will start with record 3.

This is why an input split is only a logical chunk of data. It points to start and end locations within blocks.
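
Concretely, the split a mapper receives from the file-based input formats is a FileSplit: just a file path, a start offset, a length, and the list of hosts that hold the underlying block(s). The example below builds one by hand purely to show what metadata a split carries; the path, offsets, and host names are hypothetical, chosen to match the example above.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Illustrative only: a FileSplit is pure metadata; the bytes themselves
// stay on HDFS and are read by the record reader at runtime.
public class FileSplitSketch {
    public static void main(String[] args) throws Exception {
        FileSplit split = new FileSplit(
                new Path("/data/input/file.csv"),          // hypothetical file
                0L,                                        // start offset: split 1 begins at byte 0
                200L * 1024 * 1024,                        // length: a 200MB logical chunk
                new String[] {"nodeA", "nodeB", "nodeC"}); // hosts holding the block replicas

        System.out.println("file   = " + split.getPath());
        System.out.println("start  = " + split.getStart());
        System.out.println("length = " + split.getLength());
        for (String host : split.getLocations()) {
            System.out.println("host   = " + host);
        }
    }
}
```

In a real job, getSplits() fills in the host list from the block's replica locations, which is what the scheduler uses as its set of candidate nodes for running the map task with data locality.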

Hope this helps.
