Wrong key class: Text is not IntWritable
Question
This may seem like a stupid question, but I fail to see the problem with the types in my MapReduce code for Hadoop.
As stated in the title, the problem is that it expects IntWritable but I'm passing it a Text object in the collector.collect of the reducer.
My job configuration has the following mapper output classes:
conf.setMapOutputKeyClass(IntWritable.class);
conf.setMapOutputValueClass(IntWritable.class);
And the following reducer output classes:
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
My reducer class has the following definition:
public static class Reduce extends MapReduceBase implements Reducer<IntWritable, IntWritable, Text, IntWritable>
with the required function:
public void reduce(IntWritable key, Iterator<IntWritable> values, OutputCollector<Text,IntWritable> output, Reporter reporter)
And then it fails when I call:
output.collect(new Text(),new IntWritable());
I'm fairly new to MapReduce, but all the types seem to match. It compiles, but then fails on that line saying it's expecting an IntWritable as the key for the reduce class. If it matters, I'm using Hadoop version 0.21.
Here is my map class:
public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, IntWritable, IntWritable> {
    private IntWritable node = new IntWritable();
    private IntWritable edge = new IntWritable();

    public void map(LongWritable key, Text value, OutputCollector<IntWritable, IntWritable> output, Reporter reporter) throws IOException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            node.set(Integer.parseInt(tokenizer.nextToken()));
            edge.set(Integer.parseInt(tokenizer.nextToken()));
            if (node.get() < edge.get())
                output.collect(node, edge);
        }
    }
}
And my reduce class:
public static class Reduce extends MapReduceBase implements Reducer<IntWritable, IntWritable, Text, IntWritable> {
    IntWritable $ = new IntWritable(Integer.MAX_VALUE);
    Text keyText = new Text();

    public void reduce(IntWritable key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        ArrayList<IntWritable> valueList = new ArrayList<IntWritable>();
        // outputs original edge pair as key and $ for value
        while (values.hasNext()) {
            IntWritable value = values.next();
            valueList.add(value);
            keyText.set(key.get() + ", " + value.get());
            output.collect(keyText, $);
        }
        // outputs all the 2 length pairs
        for (int i = 0; i < valueList.size(); i++)
            for (int j = i + 1; j < valueList.size(); j++)
                output.collect(new Text(valueList.get(i).get() + ", " + valueList.get(j).get()), key);
    }
}
And my job configuration:
JobConf conf = new JobConf(Triangles.class);
conf.setJobName("mapred1");
conf.setMapOutputKeyClass(IntWritable.class);
conf.setMapOutputValueClass(IntWritable.class);
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
conf.setMapperClass(Map.class);
conf.setCombinerClass(Reduce.class);
conf.setReducerClass(Reduce.class);
conf.setInputFormat(TextInputFormat.class);
conf.setOutputFormat(TextOutputFormat.class);
FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path("mapred1"));
JobClient.runJob(conf);
Answer
Your problem is that you set the Reduce class as a combiner:
conf.setCombinerClass(Reduce.class);
Combiners run in the map phase, and they need to emit the same key/value types as the map output (IntWritable, IntWritable in your case). Remove this line and you should be OK.
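To make the fix concrete, here is a sketch of the corrected driver setup, using the same class and method names shown in the question (a config fragment, not a complete runnable program):

```java
// Corrected job configuration: the setCombinerClass line is gone,
// because Reduce emits Text keys while the map output keys are IntWritable,
// and a combiner must consume AND produce the map output types.
JobConf conf = new JobConf(Triangles.class);
conf.setJobName("mapred1");

// Map output types: IntWritable -> IntWritable
conf.setMapOutputKeyClass(IntWritable.class);
conf.setMapOutputValueClass(IntWritable.class);

// Final (reducer) output types: Text -> IntWritable
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);

conf.setMapperClass(Map.class);
// conf.setCombinerClass(Reduce.class);  // removed: caused "Wrong key class"
conf.setReducerClass(Reduce.class);

conf.setInputFormat(TextInputFormat.class);
conf.setOutputFormat(TextOutputFormat.class);

FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path("mapred1"));
JobClient.runJob(conf);
```

If you do want a combiner for this job, it would have to be a separate Reducer<IntWritable, IntWritable, IntWritable, IntWritable> implementation, so that its output matches the map output types.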