Unable to run Hadoop wordcount example?


Problem Description

I am running the Hadoop wordcount example in a single-node environment on Ubuntu 12.04 in VMware. I run the example like this:

hadoop@master:~/hadoop$ hadoop jar hadoop-examples-1.0.4.jar wordcount    
/home/hadoop/gutenberg/ /home/hadoop/gutenberg-output

I have the input file at the following location:

/home/hadoop/gutenberg

and the location for the output is:

    /home/hadoop/gutenberg-output
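Both paths can be checked in HDFS before submitting the job. This is only a quick sanity-check sketch, assuming the same paths as above and that the default filesystem is HDFS (as the staging URL in the error below suggests):

    hadoop fs -ls /home/hadoop/gutenberg          # input directory: should list the text files
    hadoop fs -ls /home/hadoop/gutenberg-output   # should report that the path does not exist before the first run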

When I run the wordcount program I get the following error:

 13/04/18 06:02:10 INFO mapred.JobClient: Cleaning up the staging area     
hdfs://localhost:54310/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201304180554_0001       
13/04/18 06:02:10 ERROR security.UserGroupInformation: PriviledgedActionException       
as:hadoop cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory 
/home/hadoop/gutenberg-output already exists 
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory 
/home/hadoop/gutenberg-output already exists at 

org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.j 
ava:137) at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:887) at 
org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:416) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121) at   
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850) at  
org.apache.hadoop.mapreduce.Job.submit(Job.java:500) at  
org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530) at 
org.apache.hadoop.examples.WordCount.main(WordCount.java:67) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:616) at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68) 
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139) at 
org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:616) at   
org.apache.hadoop.util.RunJar.main(RunJar.java:156) hadoop@master:~/hadoop$ bin/stop-
all.sh Warning: $HADOOP_HOME is deprecated. stopping jobtracker localhost: stopping   
tasktracker stopping namenode localhost: stopping datanode localhost: stopping 
secondarynamenode    hadoop@master:~/hadoop$

Solution

Delete the output directory that already exists, or output to a different directory.

(I'm a little curious what other interpretations of the error message you considered.)
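A minimal sketch of both options, assuming the paths from the question (the second output name, gutenberg-output-2, is just an illustrative placeholder):

    # Option 1: remove the stale output directory from HDFS (Hadoop 1.x shell), then re-run the job
    hadoop fs -rmr /home/hadoop/gutenberg-output
    hadoop jar hadoop-examples-1.0.4.jar wordcount /home/hadoop/gutenberg /home/hadoop/gutenberg-output

    # Option 2: keep the old results and write to a directory that does not exist yet
    hadoop jar hadoop-examples-1.0.4.jar wordcount /home/hadoop/gutenberg /home/hadoop/gutenberg-output-2

FileOutputFormat refuses to overwrite an existing output directory so that the results of a previous job are never silently clobbered, which is why checkOutputSpecs rejects the job during submission, before any map task starts.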
