Hadoop error in execution: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable


Problem description

I'm implementing a PageRank algorithm on Hadoop and, as the title says, I came up with the following error while trying to execute the code:

Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable

In my input file I store graph node ids as keys and some info about them as values. My input file has the following format (a small parsing sketch follows these examples):

1 \t 3.4,2,5,6,67

4 \t 4.2,77,2,7,83

... ...
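To make the layout concrete, here is a minimal, self-contained sketch that splits one such line into its key and its comma-separated value fields; treating the first field as a rank value and the rest as related node ids is an assumption based on the description above:

// Minimal sketch: split one line of the form "nodeId \t field0,field1,..."
// Assumption: field0 is the node's rank, the remaining fields are node ids.
public class InputLineSketch {
    public static void main(String[] args) {
        String line = "1\t3.4,2,5,6,67";             // example row from above
        String[] keyValue = line.split("\t");        // key = "1", value = "3.4,2,5,6,67"
        String nodeId = keyValue[0];
        String[] fields = keyValue[1].split(",");
        float rank = Float.parseFloat(fields[0]);
        System.out.println(nodeId + " -> rank " + rank + ", " + (fields.length - 1) + " other fields");
    }
}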

Trying to understand what the error says, I attempted to use LongWritable as my main variable type, as you can see in the code below. This means I have:

map< LongWritable,LongWritable,LongWritable,LongWritable >

reduce< LongWritable,LongWritable,LongWritable,LongWritable >

but I also tried:

map< Text,Text,Text,Text >

reduce< Text,Text,Text,Text >

and also:

map< LongWritable,Text,LongWritable,Text >

reduce< LongWritable,Text,LongWritable,Text >

and I always come up with the same error. I guess I have trouble understanding what "expected" and "received" mean in the error. Does it mean that my map function expected LongWritable from my input file and got Text? Is there a problem with the format of the input file I use, or with the variable types?
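For reference, the four type parameters in the lists above correspond to the new-API Mapper signature, i.e. <input key, input value, output key, output value>. A minimal placeholder sketch (the class name and the chosen types are illustrative only, not a proposed fix):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Placeholder sketch: Mapper<input key, input value, output key, output value>.
public class SketchMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // With TextInputFormat the input key is the line's byte offset
        // and the input value is the line text itself.
        context.write(new Text(line.toString()), offset);
    }
}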

Here is the full code; can you tell me what to change and where?

import java.io.IOException;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.lang.Object.*;

import org.apache.commons.cli.ParseException;
import org.apache.commons.lang.StringUtils;
import org.apache.commons.configuration.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.log4j.*;
import org.apache.commons.logging.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;



public class Pagerank
{   


public static class PRMap extends Mapper<LongWritable, LongWritable, LongWritable, LongWritable>
{

    public void map(LongWritable lineNum, LongWritable line, OutputCollector<LongWritable, LongWritable> outputCollector, Reporter reporter) throws IOException, InterruptedException
    {
        if (line.toString().length() == 0) {
            return;
        }

        Text key = new Text();
        Text value = new Text();
        LongWritable valuel = new LongWritable();
        StringTokenizer spline = new StringTokenizer(line.toString(),"\t");
        key.set(spline.nextToken());  
        value.set(spline.nextToken());  

        valuel.set(Long.parseLong(value.toString()));
        outputCollector.collect(lineNum,valuel);


        String info = value.toString();
        String splitter[] = info.split(",");

        if(splitter.length >= 3)
        {
            float f = Float.parseFloat(splitter[0]);
            float pagerank = f / (splitter.length - 2);

            for(int i=2;i<splitter.length;i++)
            {
                LongWritable key2 = new LongWritable();
                LongWritable value2 = new LongWritable();
                long l;

                l = Long.parseLong(splitter[i]);
                key2.set(l);
                //key2.set(splitter[i]);
                value2.set((long)f);

                outputCollector.collect(key2, value2);
            }
        }
    }
}

public static class PRReduce extends Reducer<LongWritable,LongWritable,LongWritable,LongWritable>
{
    private Text result = new Text();
    public void reduce(LongWritable key, Iterator<LongWritable> values,OutputCollector<LongWritable, LongWritable> results, Reporter reporter) throws IOException, InterruptedException
    {

        float pagerank = 0;
        String allinone = ",";
        while(values.hasNext())
        {
            LongWritable temp = values.next();
            String converted = temp.toString();
            String[] splitted = converted.split(",");

            if(splitted.length > 1)
            {                   
                for(int i=1;i<splitted.length;i++)
                {
                    allinone = allinone.concat(splitted[i]);
                    if(i != splitted.length - 1)
                        allinone = allinone.concat(",");
                }
            }
            else
            {
                float f = Float.parseFloat(splitted[0]);
                pagerank = pagerank + f;
            }
        }
        String last = Float.toString(pagerank);
        last = last.concat(allinone);

        LongWritable value = new LongWritable();
        value.set(Long.parseLong(last));

        results.collect(key, value);
    }      
}



public static void main(String[] args) throws Exception
{      


    org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();

    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }

    Job job = new Job(conf, "pagerank_itr0");

    job.setJarByClass(Pagerank.class);      
    job.setMapperClass(Pagerank.PRMap.class);       
    job.setReducerClass(Pagerank.PRReduce.class);    


    job.setOutputKeyClass(LongWritable.class);            
    job.setOutputValueClass(LongWritable.class);            
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    job.waitForCompletion(true);

}
}

Solution

You are not setting the mapper output classes in the job configuration. Try setting the map output key and value classes on the Job using the methods:

setMapOutputKeyClass();

setMapOutputValueClass();
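For example, a minimal sketch of the extra driver lines, assuming the map output types are meant to stay LongWritable (use whatever classes your map() actually emits):

// Sketch only: add these next to the other job.set* calls in main().
// LongWritable is an assumption here; match the types your map() actually writes.
job.setMapOutputKeyClass(LongWritable.class);
job.setMapOutputValueClass(LongWritable.class);

If the map output classes are not set explicitly, Hadoop falls back to the job-wide classes from setOutputKeyClass/setOutputValueClass, and the map-side output check raises exactly this "Type mismatch in key from map" error when the key the mapper emits has a different class than the configured one.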
