Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable


Problem description

I am trying to run a map/reduce job in Java. Below are my files.

WordCount.java

package counter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

public int run(String[] arg0) throws Exception {
    Configuration conf = new Configuration();

    Job job = new Job(conf, "wordcount");

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    job.setMapperClass(WordCountMapper.class);
    job.setReducerClass(WordCountReducer.class);

    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    FileInputFormat.addInputPath(job, new Path("counterinput"));
    // Erase previous run output (if any)
    FileSystem.get(conf).delete(new Path("counteroutput"), true);
    FileOutputFormat.setOutputPath(job, new Path("counteroutput"));

    job.waitForCompletion(true);
    return 0;
}   

public static void main(String[] args) throws Exception {
    int res = ToolRunner.run(new Configuration(), new WordCount(), args);
    System.exit(res);

    }
}

WordCountMapper.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, OutputCollector<Text,IntWritable> output, Reporter reporter)
    throws IOException, InterruptedException {
        System.out.println("hi");
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        output.collect(word, one);
        }
    }
}

WordCountReducer.java

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
        OutputCollector<Text,IntWritable> output, Reporter reporter) throws IOException, InterruptedException {
        System.out.println("hello");
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}

I am getting the following error:

13/06/23 23:13:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
13/06/23 23:13:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/06/23 23:13:26 INFO input.FileInputFormat: Total input paths to process : 1
13/06/23 23:13:26 INFO mapred.JobClient: Running job: job_local_0001
13/06/23 23:13:26 INFO input.FileInputFormat: Total input paths to process : 1
13/06/23 23:13:26 INFO mapred.MapTask: io.sort.mb = 100
13/06/23 23:13:26 INFO mapred.MapTask: data buffer = 79691776/99614720
13/06/23 23:13:26 INFO mapred.MapTask: record buffer = 262144/327680
13/06/23 23:13:26 WARN mapred.LocalJobRunner: job_local_0001
java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:845)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:541)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
13/06/23 23:13:27 INFO mapred.JobClient:  map 0% reduce 0%
13/06/23 23:13:27 INFO mapred.JobClient: Job complete: job_local_0001
13/06/23 23:13:27 INFO mapred.JobClient: Counters: 0

I think it is not able to find the Mapper and Reducer classes. I have written the code in the main class, and it seems to be picking up the default Mapper and Reducer classes.
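One way to check that hypothesis (a sketch only, assuming the new org.apache.hadoop.mapreduce API that the classes above extend) is to mark the map method with @Override. In that API, map() takes a Context rather than an OutputCollector/Reporter, so if @Override does not compile against the signature posted above, the method never overrides Mapper.map() and the inherited identity map() runs instead:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch: the new-API mapper signature. With the question's
// map(LongWritable, Text, OutputCollector, Reporter) signature, @Override
// would not compile, and Hadoop falls back to the identity map(), which
// emits the LongWritable input key unchanged.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one); // Text key, IntWritable value
        }
    }
}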

Answer

Add these two lines to your code:

job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);

Your input is read with TextInputFormat, which supplies a LongWritable key and a Text value to the map, but you are emitting Text as the key and IntWritable as the value. You need to tell this to the framework.
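For context, a minimal sketch of where those two lines would sit in the run() method of the driver posted above (class names and everything else as in the question):

// Inside WordCount.run(String[]) from the question:
Configuration conf = new Configuration();
Job job = new Job(conf, "wordcount");

job.setMapperClass(WordCountMapper.class);
job.setReducerClass(WordCountReducer.class);

// New: declare the intermediate (map) output types explicitly,
// since they are what the map emits (Text / IntWritable).
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);

// Final (reducer) output types, as already set in the question.
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);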

HTH
