Change File Split size in Hadoop

This article covers how to change the file split size in Hadoop. It should be a useful reference for anyone facing the same problem; interested readers can follow along below.

Problem Description


I have a bunch of small files in an HDFS directory. Although the files themselves are relatively small, the processing time per file is huge. That is, a 64 MB file, which is the default split size for TextInputFormat, can take several hours to process.

What I need to do is reduce the split size, so that I can utilize more nodes for a job.

So the question is: how can I split the files by, let's say, 10 KB? Do I need to implement my own InputFormat and RecordReader for this, or is there a parameter I can set? Thanks.

Solution

The parameter mapred.max.split.size, which can be set individually per job, is what you are looking for. Don't change dfs.block.size, because that is global for HDFS and can lead to problems.
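A minimal sketch of how that per-job cap might be applied with the Hadoop 2.x (new) API; the class name SmallSplitJob and the pass-through mapper/reducer defaults are placeholders, not part of the original answer:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SmallSplitJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The raw property is "mapred.max.split.size" on the old API and
        // "mapreduce.input.fileinputformat.split.maxsize" on Hadoop 2.x+;
        // setting it directly is equivalent to the helper call below:
        // conf.setLong("mapred.max.split.size", 10 * 1024L);

        Job job = Job.getInstance(conf, "small-split-job");
        job.setJarByClass(SmallSplitJob.class);
        job.setInputFormatClass(TextInputFormat.class);

        // Cap each input split at 10 KB, so a single file is broken into
        // many splits, i.e. many map tasks that can run on more nodes.
        FileInputFormat.setMaxInputSplitSize(job, 10 * 1024L);

        // Mapper/reducer classes and output types are omitted; the Hadoop
        // defaults act as a simple pass-through in this sketch.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

If the driver goes through ToolRunner/GenericOptionsParser, the same per-job setting can also be passed on the command line, e.g. -D mapred.max.split.size=10240, without touching the code.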

That concludes this article on changing the file split size in Hadoop. We hope the recommended answer is helpful, and we hope you will continue to support IT屋!
