Cannot run jar file because of "Could not find or load main class"


Problem description

I have a test dir, which contains 3 folders:

--META-INF:
  one file MANIFEST.MF in the dir:
  MANIFEST.MF:
      Manifest-Version: 1.0
      Created-By: 1.7.0_04-ea (Oracle Corporation)
      Class-Path: lib/*; .
      Main-Class: Setup.WordCount
-- lib:
   all the external jars I need for the project
-- Setup:
   3 files in the dir:
   WordCount$IntSumReducer.class
   WordCount$TokenizerMapper.class
   WordCount.class

I create a jar file using the command

jar cmf test.jar test/META-INF/MANIFEST.MF test/Setup test/lib

but when I try to run test.jar, an error is reported:

Error: Could not find or load main class Setup.WordCount

I've tried to debug the problem for a whole day, still no idea!

WordCount.java:

package Setup;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The code is exactly the same as hadoop/example/WordCount; I'm trying to implement the hadoop example in my local dev environment.

Solution

Your jar statement is slightly off. Try this:

jar -cfm test.jar test/META-INF/MANIFEST.MF -C test Setup -C test lib

Your command puts test/Setup/WordCount.class in the jar, which is why Java cannot find Setup/WordCount.class.
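To see why the entry name matters: a jar is just a zip archive, and the JVM resolves the class name Setup.WordCount to the entry path Setup/WordCount.class. A small standalone sketch (the class JarEntryDemo is made up for illustration, not part of the question's code) that stores an entry the way the original command did, and shows that the lookup the JVM would perform fails:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.zip.ZipEntry;

public class JarEntryDemo {
    public static void main(String[] args) throws IOException {
        File jar = File.createTempFile("demo", ".jar");
        // Simulate `jar cf ... test/Setup`: the entry keeps the "test/" prefix.
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new ZipEntry("test/Setup/WordCount.class"));
            out.closeEntry();
        }
        try (JarFile jf = new JarFile(jar)) {
            // The JVM turns "Setup.WordCount" into this entry path:
            String wanted = "Setup.WordCount".replace('.', '/') + ".class";
            System.out.println(jf.getEntry(wanted));            // not found -> null
            System.out.println(jf.getEntry("test/" + wanted) != null); // true
        }
        jar.delete();
    }
}
```

With `-C test`, the jar tool changes into the test directory before adding files, so the stored entry becomes Setup/WordCount.class and the lookup succeeds.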

You are also missing the package statement in the code you posted:

package Setup;
