ClassNotFoundException after job submission


Problem description

I'm trying out Spring Data - Hadoop to execute MR code on a remote cluster from my local machine's IDE.

//Hadoop 1.1.2, Spring 3.2.4, Spring-Data-Hadoop 1.0.0

Also tried with these versions:

Hadoop 1.2.1, Spring 4.0.1, Spring-Data-Hadoop 2.0.2

applicationContext.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:hdp="http://www.springframework.org/schema/hadoop"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd 
    http://www.springframework.org/schema/hadoop http://www.springframework.org/schema/hadoop/spring-hadoop.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd">

    <context:property-placeholder location="resources/hadoop.properties" />

    <hdp:configuration file-system-uri="${hd.fs}" job-tracker-uri="${hd.jobtracker.uri}">

    </hdp:configuration>

    <hdp:job id="wc-job" mapper="com.hadoop.basics.WordCounter.WCMapper"
        reducer="com.hadoop.basics.WordCounter.WCReducer" input-path="${wordcount.input.path}"
        output-path="${wordcount.output.path}" user="bigdata">
    </hdp:job>

    <hdp:job-runner id="myjobs-runner" job-ref="wc-job"
        run-at-startup="true" />

    <hdp:resource-loader id="resourceLoader" uri="${hd.fs}"
        user="bigdata" />   
</beans>

WordCounter.java:

package com.hadoop.basics;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class WordCounter {

    private static IntWritable one = new IntWritable(1);

    public static class WCMapper extends Mapper<Text, Text, Text, IntWritable> {

        @Override
        protected void map(
                Text key,
                Text value,
                org.apache.hadoop.mapreduce.Mapper<Text, Text, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            // TODO Auto-generated method stub
            StringTokenizer strTokenizer = new StringTokenizer(value.toString());
            Text token = new Text();

            while (strTokenizer.hasMoreTokens()) {
                token.set(strTokenizer.nextToken());
                context.write(token, one);
            }
        }
    }

    public static class WCReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(
                Text key,
                Iterable<IntWritable> values,
                org.apache.hadoop.mapreduce.Reducer<Text, IntWritable, Text, IntWritable>.Context context)
                throws IOException, InterruptedException {
            // TODO Auto-generated method stub

            int sum = 0;

            for (IntWritable value : values) {
                sum += value.get();
            }

            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) {
        AbstractApplicationContext context = new ClassPathXmlApplicationContext(
                "applicationContext.xml", WordCounter.class);
        System.out.println("Word Count Application Running");
        context.registerShutdownHook();
    }
}

The output is:

Aug 23, 2013 11:07:48 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@1815338: startup date [Fri Aug 23 11:07:48 IST 2013]; root of context hierarchy
Aug 23, 2013 11:07:48 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [com/hadoop/basics/applicationContext.xml]
Aug 23, 2013 11:07:48 AM org.springframework.core.io.support.PropertiesLoaderSupport loadProperties
INFO: Loading properties file from class path resource [resources/hadoop.properties]
Aug 23, 2013 11:07:48 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@7c197e: defining beans [org.springframework.context.support.PropertySourcesPlaceholderConfigurer#0,hadoopConfiguration,wc-job,myjobs-runner,resourceLoader]; root of factory hierarchy
Aug 23, 2013 11:07:49 AM org.springframework.data.hadoop.mapreduce.JobExecutor$2 run
INFO: Starting job [wc-job]
Aug 23, 2013 11:07:49 AM org.apache.hadoop.mapred.JobClient copyAndConfigureFiles
WARNING: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
Aug 23, 2013 11:07:49 AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Aug 23, 2013 11:07:50 AM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Aug 23, 2013 11:07:50 AM org.apache.hadoop.io.compress.snappy.LoadSnappy <clinit>
WARNING: Snappy native library not loaded
Aug 23, 2013 11:07:52 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_201308231532_0002
Aug 23, 2013 11:07:53 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO:  map 0% reduce 0%
Aug 23, 2013 11:08:12 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Task Id : attempt_201308231532_0002_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.hadoop.basics.WordCounter$WCMapper
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:849)
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: com.hadoop.basics.WordCounter$WCMapper
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:802)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:847)
    ... 8 more

Aug 23, 2013 11:08:33 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Task Id : attempt_201308231532_0002_m_000000_1, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.hadoop.basics.WordCounter$WCMapper
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:849)
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: com.hadoop.basics.WordCounter$WCMapper
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:802)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:847)
    ... 8 more

Aug 23, 2013 11:08:51 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Task Id : attempt_201308231532_0002_m_000000_2, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.hadoop.basics.WordCounter$WCMapper
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:849)
    at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: com.hadoop.basics.WordCounter$WCMapper
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:802)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:847)
    ... 8 more

Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_201308231532_0002
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 7
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO:   Job Counters 
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO:     SLOTS_MILLIS_MAPS=86688
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO:     Total time spent by all reduces waiting after reserving slots (ms)=0
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO:     Total time spent by all maps waiting after reserving slots (ms)=0
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO:     Launched map tasks=4
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO:     Data-local map tasks=4
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO:     SLOTS_MILLIS_REDUCES=0
Aug 23, 2013 11:09:24 AM org.apache.hadoop.mapred.Counters log
INFO:     Failed map tasks=1
Aug 23, 2013 11:09:24 AM org.springframework.data.hadoop.mapreduce.JobExecutor$2 run
INFO: Completed job [wc-job]
Aug 23, 2013 11:09:24 AM org.springframework.beans.factory.support.DefaultSingletonBeanRegistry destroySingletons
INFO: Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@7c197e: defining beans [org.springframework.context.support.PropertySourcesPlaceholderConfigurer#0,hadoopConfiguration,wc-job,myjobs-runner,resourceLoader]; root of factory hierarchy
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'myjobs-runner': Invocation of init method failed; nested exception is java.lang.IllegalStateException: Job [wc-job] failed to start; status=FAILED
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1482)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:458)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:295)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:223)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:292)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:628)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:932)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:479)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:197)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:172)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:158)
    at com.hadoop.basics.WordCounter.main(WordCounter.java:58)
Caused by: java.lang.IllegalStateException: Job [wc-job] failed to start; status=FAILED
    at org.springframework.data.hadoop.mapreduce.JobExecutor$2.run(JobExecutor.java:219)
    at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:49)
    at org.springframework.data.hadoop.mapreduce.JobExecutor.startJobs(JobExecutor.java:168)
    at org.springframework.data.hadoop.mapreduce.JobExecutor.startJobs(JobExecutor.java:160)
    at org.springframework.data.hadoop.mapreduce.JobRunner.call(JobRunner.java:52)
    at org.springframework.data.hadoop.mapreduce.JobRunner.afterPropertiesSet(JobRunner.java:44)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1541)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1479)
    ... 13 more

What configuration have I missed? Is it really possible to submit a Hadoop job remotely using Spring Data without creating a jar, etc.?
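For reference, the `$` in `com.hadoop.basics.WordCounter$WCMapper` from the stack trace is the *binary name* of a nested class; the remote task JVM must resolve exactly that name from its classpath, which fails when no job jar carrying the class is shipped. A minimal JDK-only sketch (hypothetical class names, not from the original post) shows where that name comes from:

```java
public class NestedNameDemo {

    // A nested class, analogous to WordCounter.WCMapper in the question.
    static class Inner {
    }

    public static void main(String[] args) {
        // The binary name of a nested class joins the enclosing class and
        // the member with '$' - this is the string the failing task tries
        // to load via Class.forName on the remote node.
        System.out.println(Inner.class.getName()); // prints "NestedNameDemo$Inner"
    }
}
```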

Solution

I was getting the same issue. Separating the mapper and reducer into their own top-level classes fixed it, together with the following changes in applicationContext.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:hdp="http://www.springframework.org/schema/hadoop" xmlns:batch="http://www.springframework.org/schema/batch"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/hadoop http://www.springframework.org/schema/hadoop/spring-hadoop.xsd
     http://www.springframework.org/schema/context  http://www.springframework.org/schema/context/spring-context.xsd
     http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch.xsd
    http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-4.2.xsd">

    <context:property-placeholder location="classpath:application.properties" />
    <hdp:configuration namenode-principal="hdfs://xx.yy.com" rm-manager-uri="xx.yy.com"
        security-method="kerb" user-keytab="location" rm-manager-principal="username"
        user-principal="username">
        fs.default.name=${fs.default.name}
        mapred.job.tracker=${mapred.job.tracker}
    </hdp:configuration>

    <hdp:job id="wordCountJobId" input-path="${input.path}"
        output-path="${output.path}" jar-by-class="com.xx.poc.Application"
        mapper="com.xx.poc.Map" reducer="com.xx.poc.Reduce" />

    <hdp:job-runner id="wordCountJobRunner" job-ref="wordCountJobId"
        run-at-startup="true" />
</beans>
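To illustrate the shape this answer recommends, here is a JDK-only sketch of the same word-count logic split into top-level classes (class names `WcMap`, `WcReduce`, and `WordCountSketch` are hypothetical; in the real job these would extend `org.apache.hadoop.mapreduce.Mapper` and `Reducer`, each in its own source file, so the task JVM can load them by a plain binary name rather than an `Outer$Inner` one):

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;
import java.util.TreeMap;

// Top-level "mapper": emits a (token, 1) pair per word in the input line.
class WcMap {
    static List<java.util.Map.Entry<String, Integer>> map(String line) {
        List<java.util.Map.Entry<String, Integer>> out = new ArrayList<>();
        StringTokenizer tok = new StringTokenizer(line);
        while (tok.hasMoreTokens()) {
            out.add(new AbstractMap.SimpleEntry<>(tok.nextToken(), 1));
        }
        return out;
    }
}

// Top-level "reducer": sums the counts collected for one token.
class WcReduce {
    static int reduce(Iterable<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }
}

public class WordCountSketch {
    public static void main(String[] args) {
        // Group the mapper's output by key, as the shuffle phase would.
        TreeMap<String, List<Integer>> grouped = new TreeMap<>();
        for (java.util.Map.Entry<String, Integer> e : WcMap.map("to be or not to be")) {
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        // Reduce each group: prints be 2, not 1, or 1, to 2 (tab-separated).
        for (java.util.Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            System.out.println(e.getKey() + "\t" + WcReduce.reduce(e.getValue()));
        }
    }
}
```

Because both classes are top level, their binary names are plain (`com.xx.poc.Map`, `com.xx.poc.Reduce` in the answer's configuration), which together with `jar-by-class` lets the framework package and resolve the user classes on the remote task nodes.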
