Job Token file not found when running Hadoop wordcount example


Problem description


I just installed Hadoop successfully on a small cluster. Now I'm trying to run the wordcount example but I'm getting this error:

hdfs://localhost:54310/user/myname/test11
12/04/24 13:26:45 INFO input.FileInputFormat: Total input paths to process : 1
12/04/24 13:26:45 INFO mapred.JobClient: Running job: job_201204241257_0003
12/04/24 13:26:46 INFO mapred.JobClient:  map 0% reduce 0%
12/04/24 13:26:50 INFO mapred.JobClient: Task Id : attempt_201204241257_0003_m_000002_0, Status : FAILED
Error initializing attempt_201204241257_0003_m_000002_0:
java.io.IOException: Exception reading file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:135)
    at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:165)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1179)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.FileNotFoundException: File file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:129)
    ... 5 more

Any help?

Solution

I just worked through this same error--setting the permissions recursively on my Hadoop directory didn't help. Following Mohyt's recommendation here, I modified core-site.xml (in the hadoop/conf/ directory) to remove the place where I specified the temp directory (hadoop.tmp.dir in the XML). After allowing Hadoop to create its own temp directory, I'm running error-free.
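For reference, the property the answer describes removing looks like the sketch of `core-site.xml` below. The `fs.default.name` value is taken from the question (the job ran against `hdfs://localhost:54310`); the `/app/hadoop/tmp` path is only an illustrative placeholder commonly seen in setup tutorials, not a value from the original post.

```xml
<?xml version="1.0"?>
<!-- conf/core-site.xml (Hadoop 1.x layout, as in the question) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>

  <!-- Deleting this property block (per the answer above) lets Hadoop fall
       back to its built-in default temp location. If you keep it instead,
       make sure the directory exists and is writable by the user running
       the TaskTracker. The path below is a placeholder; yours may differ. -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
</configuration>
```

After editing the file, restart the daemons (e.g. `bin/stop-all.sh` followed by `bin/start-all.sh` on Hadoop 1.x) so the new configuration takes effect before re-running the wordcount job.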

