Change File Split size in Hadoop


Problem description

I have a bunch of small files in an HDFS directory. Although the total volume of the files is relatively small, the processing time per file is huge. That is, a 64 MB file, which is the default split size for TextInputFormat, can take several hours to process.

What I need to do is reduce the split size, so that I can utilize even more nodes for the job.

So the question is: how can I split the files by, let's say, 10 KB? Do I need to implement my own InputFormat and RecordReader for this, or is there a parameter I can set? Thanks.

Recommended answer

The parameter mapred.max.split.size, which can be set per job individually, is what you are looking for. Don't change dfs.block.size, because that is global to HDFS and can lead to problems.
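For example, the property can be set on the job's Configuration before submission. Below is a minimal sketch assuming the newer mapreduce API; the class name and the commented-out mapper/reducer hooks are placeholders, not part of the original answer. Note that on recent Hadoop releases the property was renamed to mapreduce.input.fileinputformat.split.maxsize, and FileInputFormat.setMaxInputSplitSize achieves the same thing programmatically:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SmallSplitJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Cap each input split at 10 KB so far more map tasks are created.
        conf.set("mapred.max.split.size", "10240");
        // Equivalent renamed property on newer Hadoop versions:
        // conf.set("mapreduce.input.fileinputformat.split.maxsize", "10240");

        Job job = Job.getInstance(conf, "small-split job");
        job.setJarByClass(SmallSplitJob.class);

        // Same effect via the new-API helper instead of the raw property:
        // FileInputFormat.setMaxInputSplitSize(job, 10240L);

        // Plug in your own mapper/reducer classes here (placeholders):
        // job.setMapperClass(MyMapper.class);
        // job.setReducerClass(MyReducer.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

If the job driver goes through ToolRunner/GenericOptionsParser, the same property can also be passed on the command line, e.g. `-D mapred.max.split.size=10240`, without touching the code.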

