custom partitioner in Hadoop error java.lang.NoSuchMethodException: <init>()


Problem description


I am trying to make a custom partitioner to allocate each unique key to a single reducer. This was after the default HashPartitioner failed (see Alternative to the default hashpartioner provided with hadoop).

I keep getting the following error. From what I can tell after doing some research, it has something to do with the constructor not receiving its arguments. But in this context, with Hadoop, aren't the arguments passed automatically by the framework? I can't find an error in the code.

18/04/20 17:06:51 INFO mapred.JobClient: Task Id : attempt_201804201340_0007_m_000000_1, Status : FAILED
java.lang.RuntimeException: java.lang.NoSuchMethodException: biA3pipepart$parti.<init>()
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:587)

This is my partitioner:

public class Parti extends Partitioner<Text, Text> {
    String partititonkey;
    int result = 0;

    @Override
    public int getPartition(Text key, Text value, int numPartitions) {
        String partitionKey = key.toString();

        if (numPartitions >= 9) {
            if (partitionKey.charAt(0) == '0') {
                if (partitionKey.charAt(2) == '0')
                    result = 0;
                else if (partitionKey.charAt(2) == '1')
                    result = 1;
                else
                    result = 2;
            } else if (partitionKey.charAt(0) == '1') {
                if (partitionKey.charAt(2) == '0')
                    result = 3;
                else if (partitionKey.charAt(2) == '1')
                    result = 4;
                else
                    result = 5;
            } else if (partitionKey.charAt(0) == '2') {
                if (partitionKey.charAt(2) == '0')
                    result = 6;
                else if (partitionKey.charAt(2) == '1')
                    result = 7;
                else
                    result = 8;
            }
        } else {
            result = 0;
        }

        return result;
    } // close method
} // close class
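Since the keys have the form "d,d" (see the list of mapper output keys below), charAt(0) and charAt(2) are the digits on either side of the comma. As a quick sanity check, here is a minimal standalone driver of my own (a sketch, not part of the original job) that exercises getPartition with those keys:

    import org.apache.hadoop.io.Text;

    // Hypothetical check of the key-to-partition mapping, assuming the
    // Parti class above is on the classpath.
    public class PartiCheck {
        public static void main(String[] args) {
            Parti p = new Parti();
            String[] keys = {"0,0", "0,1", "0,2", "1,0", "1,1", "1,2", "2,0", "2,1", "2,2"};
            for (String k : keys) {
                // With 9 partitions, "0,0" -> 0, "0,1" -> 1, ..., "2,2" -> 8.
                System.out.println(k + " -> " + p.getPartition(new Text(k), new Text(""), 9));
            }
        }
    }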

My mapper signature:

public static class JoinsMap extends Mapper<LongWritable, Text, Text, Text> {
    public void Map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

My reducer signature:

public static class JoinsReduce extends Reducer<Text, Text, Text, Text> {
    public void Reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {

Main class:

public static void main(String[] args) throws Exception {

    Configuration conf1 = new Configuration();

    Job job1 = new Job(conf1, "biA3pipepart");

    job1.setJarByClass(biA3pipepart.class);

    job1.setNumReduceTasks(9); //***

    job1.setOutputKeyClass(Text.class);
    job1.setOutputValueClass(Text.class);

    job1.setMapperClass(JoinsMap.class);
    job1.setReducerClass(JoinsReduce.class);

    job1.setInputFormatClass(TextInputFormat.class);
    job1.setOutputFormatClass(TextOutputFormat.class);

    job1.setPartitionerClass(Parti.class); //+++

    // inputs to map.
    FileInputFormat.addInputPath(job1, new Path(args[0]));

    // single output from reducer.
    FileOutputFormat.setOutputPath(job1, new Path(args[1]));

    job1.waitForCompletion(true);
}
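As an aside, the Job constructor used above (new Job(conf1, "biA3pipepart")) is deprecated on Hadoop 2.x and later; if you are on a newer release, the equivalent setup would be:

    // Equivalent job construction on Hadoop 2.x+, where the Job
    // constructor is deprecated in favor of the factory method.
    Job job1 = Job.getInstance(conf1, "biA3pipepart");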

The keys emitted by the Mapper are the following:

0,0
0,1
0,2
1,0
1,1
1,2
2,0
2,1
2,2

and the Reducer only writes the keys and values it receives.

Solution

SOLVED

I just added static to my Parti class, like the mapper and reducer classes, as suggested in a comment by user238607.

public static class Parti extends Partitioner<Text, Text> {
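For background on why this works (my explanation, not from the original post): Hadoop instantiates the partitioner reflectively via ReflectionUtils.newInstance, which requires a no-argument constructor. A non-static inner class has no true no-arg constructor, because the compiler adds a hidden parameter for the enclosing instance; marking the nested class static removes that hidden parameter. A small sketch illustrating the difference:

    import java.lang.reflect.Constructor;

    public class InnerCtorDemo {
        class Inner {}          // non-static: constructor takes a hidden InnerCtorDemo argument
        static class Nested {}  // static: has a genuine no-arg constructor

        public static void main(String[] args) throws Exception {
            for (Constructor<?> c : Inner.class.getDeclaredConstructors())
                System.out.println(c);  // prints InnerCtorDemo$Inner(InnerCtorDemo)

            Nested.class.getDeclaredConstructor().newInstance();  // succeeds

            // This is the same failure Hadoop's ReflectionUtils hits:
            Inner.class.getDeclaredConstructor().newInstance();   // NoSuchMethodException: <init>()
        }
    }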
