Pseudo distributed mode: number of map and reduce tasks


Problem description


I am a newbie to Hadoop. I have successfully configured a Hadoop setup in pseudo distributed mode. Now I would like to know what the logic is for choosing the number of map and reduce tasks. What do we refer to?

Thanks

Solution

There is no one-size-fits-all rule for how the number of mappers and reducers should be set.

Number of Mappers: You cannot set the number of mappers explicitly to a fixed value (there are parameters that appear to do so, but they do not take effect). It is decided by the number of input splits Hadoop creates for your given input. You can influence it by setting the mapred.min.split.size parameter. For more, read the InputSplit section here. If a lot of mappers are being generated because of a huge number of small files and you want to reduce the mapper count, you will need to combine data from more than one file. Read this: How to combine input files to get to a single mapper and control number of mappers.
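The relationship between block size, mapred.min.split.size, and the resulting map count can be sketched as plain arithmetic. This is a simplified model of the classic split-size formula (splitSize = max(minSplitSize, min(maxSplitSize, blockSize))); the real FileInputFormat also handles per-file boundaries and a partial last split:

```java
// Simplified sketch of how the classic Hadoop InputFormat sizes input splits.
// Assumption: the formula below mirrors the well-known
//   splitSize = max(minSplitSize, min(maxSplitSize, blockSize))
// and ignores per-file boundaries and the trailing partial split.
public class SplitEstimate {
    static long splitSize(long minSplitSize, long maxSplitSize, long blockSize) {
        return Math.max(minSplitSize, Math.min(maxSplitSize, blockSize));
    }

    static long numSplits(long totalBytes, long splitSize) {
        // ceil(totalBytes / splitSize): one map task per split
        return (totalBytes + splitSize - 1) / splitSize;
    }

    public static void main(String[] args) {
        long MB = 1024L * 1024L;
        long blockSize = 128 * MB;          // DFS block size
        long minSplit = 1;                  // mapred.min.split.size left at a tiny default
        long maxSplit = Long.MAX_VALUE;     // no explicit upper bound

        long split = splitSize(minSplit, maxSplit, blockSize);
        System.out.println("split size: " + (split / MB) + " MB");

        // Raising mapred.min.split.size above the block size reduces the map count:
        long bigger = splitSize(512 * MB, maxSplit, blockSize);
        System.out.println("with min split 512 MB: " + (bigger / MB) + " MB per split");
    }
}
```

With the defaults, one map is created per 128 MB block; raising the minimum split size to 512 MB cuts the number of maps to roughly a quarter.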

To quote from the wiki page:

The number of maps is usually driven by the number of DFS blocks in the input files. Although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.

Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
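The 82k figure in the quote is easy to verify: 10 TB divided into 128 MB blocks gives 81,920 splits, one map each:

```java
// Checks the arithmetic behind the wiki's "82k maps" example:
// 10 TB of input with 128 MB DFS blocks, one map per block.
public class MapCountCheck {
    public static void main(String[] args) {
        long tenTB = 10L * 1024 * 1024 * 1024 * 1024;   // 10 TB in bytes
        long blockSize = 128L * 1024 * 1024;            // 128 MB DFS block
        long maps = tenTB / blockSize;                  // divides evenly here
        System.out.println(maps + " maps");             // 81920, i.e. ~82k
    }
}
```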

The number of map tasks can also be increased manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
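Pulling the knobs mentioned so far together, a driver using the old mapred API might look like the fragment below. The class name MyJob and the specific values are illustrative, and the fragment needs the Hadoop 1.x jars on the classpath, so it is shown for orientation rather than as a runnable standalone program:

```java
JobConf conf = new JobConf(MyJob.class);          // MyJob is a placeholder driver class
conf.set("mapred.min.split.size", "268435456");   // 256 MB lower bound per split -> fewer maps
conf.setNumMapTasks(500);                         // only a hint; splitting can override it upward
conf.setNumReduceTasks(1);                        // e.g. when a single output report file is needed
```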

Number of Reducers: You can explicitly set the number of reducers. Just set the parameter mapred.reduce.tasks. There are guidelines for setting this number, but usually the default number of reducers is good enough. At times a single report file is required; in those cases you might want the number of reducers set to 1.

Again to quote from wiki:

The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). At 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75 the faster nodes will finish their first round of reduces and launch a second round of reduces doing a much better job of load balancing.
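The 0.95 / 1.75 heuristic from the quote, worked through for a hypothetical cluster of 10 nodes with 2 reduce slots per node (the cluster size is an assumption for illustration, not from the original post):

```java
// Applies the wiki's reducer-count heuristic: factor * (nodes * slots per node).
// The 10-node / 2-slot cluster is a hypothetical example.
public class ReducerHeuristic {
    static int reducers(double factor, int nodes, int slotsPerNode) {
        return (int) Math.round(factor * nodes * slotsPerNode);
    }

    public static void main(String[] args) {
        int nodes = 10, slots = 2;
        // 0.95: every reduce launches in a single wave as maps finish
        System.out.println("0.95 -> " + reducers(0.95, nodes, slots) + " reducers");
        // 1.75: fast nodes run a second wave, improving load balancing
        System.out.println("1.75 -> " + reducers(1.75, nodes, slots) + " reducers");
    }
}
```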

Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.

The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.

The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).
