Class not found exception in Hadoop

This article explains how to deal with a class not found exception in Hadoop. It should be a useful reference for anyone hitting the same problem.

Problem description

I'm trying to run a single-node Hadoop WordCount program. I'm doing this on Windows 10 64-bit with Cygwin, and this is the program I'm using:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
   public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable>
   {
      private final static IntWritable one = new IntWritable(1);
      private Text word = new Text();

      public void map(Object key, Text value, Context context) throws IOException, InterruptedException 
      {
         StringTokenizer itr = new StringTokenizer(value.toString());
         while (itr.hasMoreTokens()) 
         {
            word.set(itr.nextToken());
            context.write(word, one);
         }
      }
   }

   public static class IntSumReducer extends Reducer<Text,IntWritable,Text,IntWritable> 
   {
      private IntWritable result = new IntWritable();
      public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException 
      {
         int sum = 0;
         for (IntWritable val : values) 
         {
            sum += val.get();
         }
         result.set(sum);
         context.write(key, result);
      }
   }

   public static void main(String[] args) throws Exception 
   {
      Configuration conf = new Configuration();
      Job job = Job.getInstance(conf, "word count");

      job.setJarByClass(WordCount.class);
      job.setMapperClass(TokenizerMapper.class);
      job.setCombinerClass(IntSumReducer.class);
      job.setReducerClass(IntSumReducer.class);

      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(IntWritable.class);

      FileInputFormat.addInputPath(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, new Path(args[1]));

      System.exit(job.waitForCompletion(true) ? 0 : 1);
   }
}

This is the command I use to run it:

~/hadoop-1.2.1/bin/hadoop jar WordCount.jar hadoop.ProcessUnits input_dir output_dir

and I get the following error messages when I try to run the program:

    Exception in thread "main" java.lang.ClassNotFoundException: hadoop.ProcessUnits
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)


Answer

The problem is that hadoop.ProcessUnits should point to the main class you want to run. Your class is called WordCount, so the command should look something like:

$ hadoop jar WordCount.jar <PACKAGE>.WordCount input_dir output_dir

Your code doesn't include a package declaration, so I've substituted <PACKAGE> above.
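
For example, since the posted source has no package statement, WordCount sits in the default package. Under that assumption (and assuming WordCount.jar was built from exactly the code shown), the invocation would simply be:

~/hadoop-1.2.1/bin/hadoop jar WordCount.jar WordCount input_dir output_dir

If you later add a declaration such as package com.example; (a hypothetical name, not taken from the original post) at the top of WordCount.java, pass the fully qualified name com.example.WordCount after the jar name instead.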

This concludes this article on the class not found exception in Hadoop. We hope the answer above is helpful, and thank you for supporting IT屋!
