Control is not going to the reducer in Hadoop


Problem description

I have written a custom InputFormat and data type in Hadoop that can read images and store them into RGB arrays. But when I use them in my map and reduce functions, control never reaches the reducer.

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class Image {

    public static class Map extends Mapper<Text, ImageM, Text, ImageM> {

        public void map(Text key, ImageM value, Context context)
                throws IOException, InterruptedException {
          /*
           for(int i=0;i<value.Height;i++)
           {
               System.out.println();
               for(int j=0;j<value.Width;j++)
               {
                   System.out.print(" "+value.Blue[i][j]);
               }
           }       
           */
           context.write(key, value);
        }
    }

    public static class Reduce extends Reducer<Text, ImageM, Text, IntWritable> {

        public void reduce(Text key, ImageM value, Context context)
                throws IOException, InterruptedException {

           for(int i=0;i<value.Height;i++)
           {
               System.out.println();
               for(int j=0;j<value.Width;j++)
               {
                   System.out.print(value.Blue[i][j]+" ");
               }
           }
           IntWritable m = new IntWritable(10);
           context.write(key, m);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(ImageM.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(ImageFileInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        long start = new Date().getTime();    
        job.waitForCompletion(true);
        long end = new Date().getTime();
        System.out.println("Job took "+(end-start) + " milliseconds");
    }

}

Here the key in the map function is the file name, as produced by the input format.
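
For reference, one plausible way for an input format to produce the file name as the key is to read it off the FileSplit inside its RecordReader. The question does not show ImageFileInputFormat's RecordReader, so the class and the image-decoding step below are purely illustrative:

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Hypothetical sketch only: names and structure are illustrative,
// not the question's actual implementation.
public class ImageRecordReader extends RecordReader<Text, ImageM> {

    private final Text key = new Text();
    private final ImageM value = new ImageM(); // assumes a no-arg constructor, as Writables need
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) {
        // The file name from the split becomes the map key.
        key.set(((FileSplit) split).getPath().getName());
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (processed) {
            return false;
        }
        // ... decode the image file into 'value' here (elided) ...
        processed = true; // one record per file
        return true;
    }

    @Override
    public Text getCurrentKey() { return key; }

    @Override
    public ImageM getCurrentValue() { return value; }

    @Override
    public float getProgress() { return processed ? 1.0f : 0.0f; }

    @Override
    public void close() { }
}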

I get the output as "icon2.gif ImageM@31093d14".

Everything works fine if my data type is used only in the mapper. Can you guess where the problem is?

Answer

Your reduce function signature is wrong. It should be:

@Override
public void reduce(Text key, Iterable<ImageM> values, Context context) 
     throws IOException, InterruptedException

Please use the @Override annotation so the compiler can spot this error for you.
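
With the mismatched signature, your method merely overloads reduce instead of overriding it, so Hadoop runs the identity reduce inherited from Reducer, which passes each (key, value) pair straight through; that is why the output shows the raw value, ImageM@31093d14 being the default Object.toString(). A minimal sketch of the corrected Reduce class, assuming the ImageM fields (Height, Width, Blue) used in the question:

public static class Reduce extends Reducer<Text, ImageM, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<ImageM> values, Context context)
            throws IOException, InterruptedException {
        // The framework hands the reducer an Iterable of all values grouped
        // under this key, even when there is only one image per file name.
        for (ImageM value : values) {
            for (int i = 0; i < value.Height; i++) {
                System.out.println();
                for (int j = 0; j < value.Width; j++) {
                    System.out.print(value.Blue[i][j] + " ");
                }
            }
        }
        context.write(key, new IntWritable(10));
    }
}

Since the mapper emits ImageM while the reducer emits IntWritable, it is also cleaner to declare the intermediate types explicitly in the driver: job.setMapOutputKeyClass(Text.class) and job.setMapOutputValueClass(ImageM.class), with job.setOutputValueClass(IntWritable.class) for the final output.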
