java.lang.VerifyError with Hadoop


Problem description



I'm working on a Java project using Hadoop, and I'm getting a java.lang.VerifyError that I don't know how to resolve. I've seen people with the same type of question, but either there was no answer or the solutions didn't work in my case.

My class:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class GetStats {

    public static List<Statistique> stats; // class with one String and one int

    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            if (key.toString().contains("HEAD")
                    || key.toString().contains("POST")
                    || key.toString().contains("GET")
                    || key.toString().contains("OPTIONS")
                    || key.toString().contains("CONNECT"))
                GetStats.stats.add(new Statistique(key.toString().replace("\"", ""), sum));
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Start wc");
        stats = new ArrayList<>();

//      File file = new File("err.txt");
//      FileOutputStream fos = new FileOutputStream(file);
//      PrintStream ps = new PrintStream(fos);
//      System.setErr(ps);

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(GetStats.class);
        job.setMapperClass(TokenizerMapper.class);
//      job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("input"));
        job.setOutputFormatClass(NullOutputFormat.class);

        job.waitForCompletion(true);

        System.out.println(stats);
        System.out.println("End");
    }
}

And the error:

Exception in thread "main" java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    org/apache/hadoop/mapred/JobTrackerInstrumentation.create(Lorg/apache/hadoop/mapred/JobTracker;Lorg/apache/hadoop/mapred/JobConf;)Lorg/apache/hadoop/mapred/JobTrackerInstrumentation; @5: invokestatic
  Reason:
    Type 'org/apache/hadoop/metrics2/lib/DefaultMetricsSystem' (current frame, stack[2]) is not assignable to 'org/apache/hadoop/metrics2/MetricsSystem'
  Current Frame:
    bci: @5
    flags: { }
    locals: { 'org/apache/hadoop/mapred/JobTracker', 'org/apache/hadoop/mapred/JobConf' }
    stack: { 'org/apache/hadoop/mapred/JobTracker', 'org/apache/hadoop/mapred/JobConf', 'org/apache/hadoop/metrics2/lib/DefaultMetricsSystem' }
  Bytecode:
    0000000: 2a2b b200 03b8 0004 b0                

    at org.apache.hadoop.mapred.LocalJobRunner.<init>(LocalJobRunner.java:573)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:494)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:479)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:563)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:561)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:549)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at hadoop.GetStats.main(GetStats.java:79)

Do you have any idea? If you need anything more to help me, just ask.

Solution

I solved my problem.

The imported jar was fine, but another version (probably an older one) that I had tried earlier was also in the project folder. When I ran the class, it appears the older version of the jar was used; that jar also came before the one I wanted on the classpath. I deleted the older jar from the project folder, and it worked.
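To track down this kind of conflict, one diagnostic (a minimal sketch, not from the original post; the class name `WhichJar` is made up for illustration) is to ask the JVM which jar it actually loaded a suspect class from, via its `CodeSource`. Running this with the Hadoop class names from the stack trace (e.g. `org.apache.hadoop.metrics2.lib.DefaultMetricsSystem`) would show which of two conflicting jars wins on the classpath:

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns the location (typically a jar path) the named class was loaded from.
    static String locationOf(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            // JDK bootstrap classes have no CodeSource; application jars report their path.
            return src == null ? "<bootstrap>" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "<not found: " + className + ">";
        }
    }

    public static void main(String[] args) {
        // Substitute the classes from the VerifyError, e.g.
        // "org.apache.hadoop.metrics2.lib.DefaultMetricsSystem".
        System.out.println(locationOf("java.util.ArrayList"));
    }
}
```

If the printed path points at a different jar than expected, that jar shadows the intended one, which is exactly the situation that produced the VerifyError here.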
