Hadoop: How does OutputCollector work during MapReduce?


Problem description



I want to know whether the OutputCollector 'instance' output used in the map function, as in output.collect(key, value), stores the key-value pairs somewhere. Even though it emits them to the reducer function, there must be intermediate files, right? What are those files? Are they visible to, and decided by, the programmer? Are the OutputKeyClass and OutputValueClass that we specify in the main function [Text.class and IntWritable.class] these places of storage?

I'm giving the standard code for the Word Count example in MapReduce, which can be found in many places on the net.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}

Solution

The output from the Map function is stored in temporary intermediate files. These files are handled transparently by Hadoop, so in a normal scenario the programmer doesn't have access to them. If you're curious about what's happening inside each mapper, you can review the logs for the respective job, where you'll find a log file for each map task.
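As a side note, on the old mapred API used above, the local directories that hold this intermediate data are named by the mapred.local.dir configuration property; the file layout beneath them (spill files and so on) is an internal detail that varies between Hadoop versions. A minimal sketch to print that setting, with a hypothetical class name ShowLocalDirs:

import org.apache.hadoop.mapred.JobConf;

// Minimal sketch: print the local directories Hadoop uses for intermediate
// map output. "mapred.local.dir" is the old-API property; the spill files
// created beneath these directories are internal and version-dependent.
public class ShowLocalDirs {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        System.out.println("mapred.local.dir = " + conf.get("mapred.local.dir"));
    }
}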

If you want to control where the temporary files are generated, and to have access to them, you would have to create your own OutputCollector class, and I don't know how easy that is. A lighter-weight variant, which wraps the collector you are given rather than replacing the machinery, is sketched below.
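This is a minimal sketch, again assuming the old mapred API; the class name LoggingCollector is hypothetical. It implements the same OutputCollector interface, logs each emitted pair to stderr (which ends up in the task's logs), and then delegates to the real collector. Note that it only lets you observe the pairs; Hadoop still decides where the intermediate files physically live.

import java.io.IOException;

import org.apache.hadoop.mapred.OutputCollector;

// Hypothetical wrapper for illustration: logs every emitted pair, then
// delegates to the framework's collector. Inside map() you would write
//   OutputCollector<Text, IntWritable> logged =
//       new LoggingCollector<Text, IntWritable>(output);
// and call logged.collect(...) instead of output.collect(...).
public class LoggingCollector<K, V> implements OutputCollector<K, V> {
    private final OutputCollector<K, V> delegate;

    public LoggingCollector(OutputCollector<K, V> delegate) {
        this.delegate = delegate;
    }

    public void collect(K key, V value) throws IOException {
        System.err.println("emit: " + key + " -> " + value); // appears in the task's stderr log
        delegate.collect(key, value);
    }
}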

If you want to have a look at the source code, you can use svn to get it. I think it is available here: http://hadoop.apache.org/common/version_control.html.

