MultipleOutputFormat in hadoop


Question

I'm a newbie to Hadoop and I'm trying out the Wordcount program.

Now, to try out multiple output files, I use MultipleOutputFormat. This link helped me do it: http://hadoop.apache.org/common/docs/r0.19.0/api/org/apache/hadoop/mapred/lib/MultipleOutputs.html

In my driver class:

    MultipleOutputs.addNamedOutput(conf, "even",
            org.apache.hadoop.mapred.TextOutputFormat.class, Text.class,
            IntWritable.class);

    MultipleOutputs.addNamedOutput(conf, "odd",
            org.apache.hadoop.mapred.TextOutputFormat.class, Text.class,
            IntWritable.class);`
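
For context, those two calls sit inside an ordinary old-API driver. A minimal sketch is below; apart from the addNamedOutput lines, everything here (the WordCount class name, the Map mapper, the argument-based paths) is assumed from the standard Wordcount example rather than taken from the question, and the Map/Reduce classes are expected to be defined alongside it:

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;
    import org.apache.hadoop.mapred.lib.MultipleOutputs;

    public class WordCount {
        public static void main(String[] args) throws IOException {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            conf.setMapperClass(Map.class);        // assumed Wordcount mapper, defined elsewhere
            conf.setReducerClass(Reduce.class);    // the reducer shown further down

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // The two named outputs used by the reducer are registered here.
            MultipleOutputs.addNamedOutput(conf, "even", TextOutputFormat.class,
                    Text.class, IntWritable.class);
            MultipleOutputs.addNamedOutput(conf, "odd", TextOutputFormat.class,
                    Text.class, IntWritable.class);

            JobClient.runJob(conf);
        }
    }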

and my reduce class became this:

public static class Reduce extends MapReduceBase implements
        Reducer<Text, IntWritable, Text, IntWritable> {
    private MultipleOutputs mos = null;

    public void configure(JobConf job) {
        mos = new MultipleOutputs(job);
    }

    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        // Route each record to the "even" or "odd" named output based on its count.
        if (sum % 2 == 0) {
            mos.getCollector("even", reporter).collect(key, new IntWritable(sum));
        } else {
            mos.getCollector("odd", reporter).collect(key, new IntWritable(sum));
        }
        //output.collect(key, new IntWritable(sum));
    }

    @Override
    public void close() throws IOException {
        mos.close();
    }
}

Things worked, but I get a LOT of files (one odd and one even for every reduce task).

Question is: how can I have just 2 output files (odd and even), so that every odd output of every reduce task gets written into that one odd file, and the same for even?

Answer

Each reducer uses its own OutputFormat to write records, which is why you get one set of odd and even files per reducer. This is by design, so that each reducer can perform its writes in parallel.

If you want just a single odd file and a single even file, you'll need to set mapred.reduce.tasks to 1. But performance will suffer, because all of the mappers will feed into a single reducer.
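
In the JobConf-based driver from the question, that amounts to one extra line (or, equivalently, passing -D mapred.reduce.tasks=1 on the command line):

    // Force a single reduce task so only one "even" and one "odd" file are written.
    conf.setNumReduceTasks(1);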

Another option is to change the process that reads these files to accept multiple input files, or to write a separate process that merges these files together.
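
If you go the merge route, a small post-processing step along these lines could concatenate, say, all of the "odd" parts into one file. This is only a sketch: the class name and the /wordcount paths are hypothetical, and it relies on the named-output part files starting with the named output's name. A shell command such as hadoop fs -getmerge does the same kind of thing for a whole directory.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class MergeOddOutputs {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path outDir = new Path("/wordcount/output");                  // job output dir (assumed)
            Path mergedPath = new Path("/wordcount/odd-merged.txt");      // merged result (assumed)

            FSDataOutputStream merged = fs.create(mergedPath);
            // Concatenate every part file written by the "odd" named output.
            FileStatus[] parts = fs.globStatus(new Path(outDir, "odd-*"));
            if (parts != null) {
                for (FileStatus part : parts) {
                    FSDataInputStream in = fs.open(part.getPath());
                    IOUtils.copyBytes(in, merged, conf, false);           // keep 'merged' open
                    in.close();
                }
            }
            merged.close();
        }
    }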
