Problem with Hadoop Java library compilation


Problem description

 


I am trying to compile and run this Java MapReduce code locally in Eclipse, but the problem below shows up. Please help: where is the issue?


This is the error that shows up:

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
    at LogFile.TraitServeur.main(TraitServeur.java:63)




The error at line 63 is about the output path:

FileOutputFormat.setOutputPath(conf, new Path(args[1]));



What I have tried:

This is my source code:

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
//import org.apache.hadoop.conf.*; /*Package de apache hadoop utilisé dans le développement*/
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
//import org.apache.hadoop.util.*;

public class TraitServeur {

    // Map phase
    public static class TokenizerMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text map = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            String[] rows = line.split("\\s+");
            StringTokenizer tokenizer = new StringTokenizer(rows[3]);
            String date = rows[0];
            String day = rows[1];
            String gravite = rows[3];
            while (tokenizer.hasMoreTokens()) {
                map.set(tokenizer.nextToken());
                // Note: this second set() overwrites the token set just above.
                map.set(date + day + "\t" + gravite);
                output.collect(map, one);
            }
        }
    }

    // Reduce phase
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(TraitServeur.class);
        conf.setJobName("dpgs");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(TokenizerMapper.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
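As an aside, the mapper's line-parsing logic can be checked in isolation before running the full job. A minimal sketch, assuming a hypothetical log line of the form `date day host severity ...` (the class name `ParseSketch` and the sample line are illustrative, not from the original code):

```java
import java.util.StringTokenizer;

// Standalone sketch of the mapper's parsing: splits a line on whitespace and
// builds the same key the mapper emits (date + day + "\t" + gravite).
public class ParseSketch {
    static String buildKey(String line) {
        String[] rows = line.split("\\s+");
        String date = rows[0];
        String day = rows[1];
        String gravite = rows[3];
        // Mirrors the original code: date and day are concatenated with no separator.
        return date + day + "\t" + gravite;
    }

    public static void main(String[] args) {
        // Hypothetical log line: date, day, host, severity, message...
        String line = "2018-05-28 Mon server1 ERROR disk full";
        System.out.println(buildKey(line));
    }
}
```

Note that a line with fewer than four whitespace-separated fields would throw the same ArrayIndexOutOfBoundsException inside the mapper, so input format matters too.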

Solution

Quote:

the line error 63 is about the output format:

FileOutputFormat.setOutputPath(conf, new Path(args[1]));

and the error message is

java.lang.ArrayIndexOutOfBoundsException

So there is no second command line argument present when executing the application.


You have to execute the application like

NameOfApp InputPath OutputPath

or better add code to check if all required command line parameters are present:

public static void main(String[] args) throws Exception {
    if (args.length < 2) {
        System.err.println("Must pass InputPath and OutputPath.");
        System.exit(1);
    }
    // ...
}


Yes, thank you, really! The problem was that I had not provided an output path. But it always shows me another warning about the native Hadoop library, plus another error:

2018-05-28 16:27:24,687 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-05-28 16:27:29,189 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1274)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2018-05-28 16:27:29,193 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2018-05-28 16:27:29,251 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
Exception in thread "main" java.lang.NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException
	at org.apache.hadoop.mapreduce.Job.getJobSubmitter(Job.java:1291)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1302)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
	at org.apache.hadoop.mapred.JobClient
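The NativeCodeLoader line is only a warning and is harmless for local runs. The NoClassDefFoundError, however, means the Codehaus Jackson classes that this Hadoop release depends on are not on the runtime classpath in Eclipse. If the project uses Maven, one way is to declare the old Codehaus Jackson artifacts; a sketch, where the version numbers are assumptions and should be matched to the jackson jars shipped in your Hadoop lib directory:

```xml
<!-- Hypothetical versions; align with the jackson-*-asl jars bundled with your Hadoop release -->
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-core-asl</artifactId>
  <version>1.9.13</version>
</dependency>
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.9.13</version>
</dependency>
```

Without Maven, adding all jars from the Hadoop installation's share/hadoop subdirectories to the Eclipse build path achieves the same thing.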

