Hadoop MapReduce - one output file for each input


Question

I'm new to Hadoop and I'm trying to figure out how it works. For an exercise I am supposed to implement something similar to the WordCount example. The task is to read in several files, do the word count, and write one output file for each input file. Hadoop uses a combiner, shuffles the map output as the input for the reducer, and then writes one output file (I guess one per running reducer instance). I was wondering whether it is possible to write one output file per input file (so keep the words of inputfile1 and write the result to outputfile1, and so on). Is it possible to override the Combiner class, or is there another solution for this? (I'm not sure whether this should even be solved in a Hadoop task, but that is the exercise.)

Thanks...

Answer

The map.input.file job configuration parameter holds the name of the file the mapper is currently processing. Read this value in the mapper and use it as the mapper's output key; that way, all key/value pairs from a single input file go to one reducer.

The code in the mapper (note that I am using the old MR API):

private JobConf conf;

@Override
public void configure(JobConf conf) {
    this.conf = conf;
}

@Override
public void map(LongWritable key, Text value,
                OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {
    // map.input.file holds the path of the input file this mapper is reading.
    String filename = conf.get("map.input.file");
    output.collect(new Text(filename), value);
}
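
The answer stops at the mapper. As a minimal sketch of the reduce side (the class name, whitespace tokenization, and word/count output layout are my own illustrative choices, not from the original answer), a reducer could count the words of each file in a single reduce call, because every line of a file arrives under the same key:

import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class PerFileWordCountReducer extends MapReduceBase
        implements Reducer<Text, Text, Text, Text> {

    @Override
    public void reduce(Text filename, Iterator<Text> lines,
                       OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // All lines of one input file arrive in this single reduce call,
        // because the mapper used the file name as the key.
        Map<String, Integer> counts = new HashMap<String, Integer>();
        while (lines.hasNext()) {
            for (String word : lines.next().toString().split("\\s+")) {
                if (word.isEmpty()) {
                    continue;
                }
                Integer c = counts.get(word);
                counts.put(word, c == null ? 1 : c + 1);
            }
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            // Keep the file name as the key so an output format can route
            // each record to a file named after its input file.
            output.collect(filename, new Text(e.getKey() + "\t" + e.getValue()));
        }
    }
}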

Then use MultipleOutputFormat, which allows the job to write multiple output files; the file names can be derived from the output keys and values.
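
As a sketch of that part (the subclass name is my own, and returning null from generateActualKey so that only the value is written relies on TextOutputFormat's handling of null keys), an old-API MultipleTextOutputFormat subclass could name each output file after the input file carried in the key:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

public class PerInputFileOutputFormat extends MultipleTextOutputFormat<Text, Text> {

    @Override
    protected String generateFileNameForKeyValue(Text key, Text value, String name) {
        // The key carries the full path of the original input file;
        // use its last path component as the name of the output file.
        return new Path(key.toString()).getName();
    }

    @Override
    protected Text generateActualKey(Text key, Text value) {
        // Drop the file-name key from the written records, so each output
        // file contains only the "word<TAB>count" values.
        return null;
    }
}

The driver would then register this class on the job's JobConf with conf.setOutputFormat(PerInputFileOutputFormat.class).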

