Nutch Crawler error: Permission denied

This article describes how to handle the Nutch Crawler error "Permission denied". The solution below may be a useful reference for anyone who runs into the same problem.

Problem description



I am trying to run a basic crawl. I got the command from the NutchTutorial: bin/crawl urls -dir crawl -depth 3 -topN 5

(after doing all the presets)

I'm running on Windows, so I've installed Cygwin64 as the runtime environment.

I don't see any problems when I run bin/nutch from the nutch home directory, but when I try to run the crawl as above I get the following error:

Injector: starting at 2014-11-29 11:31:35
Injector: crawlDb: -dir/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Eran\mapred\staging\Eran996102549\.staging to 0700
        at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)
        at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349)
        at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193)
        at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Unknown Source)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
        at org.apache.nutch.crawl.Injector.inject(Injector.java:324)
        at org.apache.nutch.crawl.Injector.run(Injector.java:380)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.nutch.crawl.Injector.main(Injector.java:370)

There's no reference to that error in their tutorial. What do I do?

Solution

This is a permission problem. You should grant read, write, and execute permissions on the folder (the <name>hadoop.tmp.dir</name> value in the Hadoop configuration file core-site.xml).
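As a minimal sketch of that fix, assuming a Cygwin shell and the default temp-directory layout (the exact path /tmp/hadoop-Eran is taken from the error message above; adjust both the config value and the path to your own setup):

```shell
#!/bin/sh
# hadoop.tmp.dir is set in core-site.xml, e.g.:
#   <property>
#     <name>hadoop.tmp.dir</name>
#     <value>/tmp/hadoop-${user.name}</value>
#   </property>
#
# Create the directory Hadoop's Injector is trying to use and give the
# owner read/write/execute permission (0700, matching the error message):
mkdir -p /tmp/hadoop-Eran
chmod -R 700 /tmp/hadoop-Eran
```

Note that on Windows, chmod run from Cygwin maps onto NTFS ACLs, so depending on the filesystem and user account the permission change may not take effect exactly as it would on Linux; if it doesn't, pointing hadoop.tmp.dir at a directory your user fully owns is the usual workaround.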

Hope this helps,

Le Quoc Do

This concludes the article on the Nutch Crawler error: Permission denied. We hope the answer above helps, and thanks for supporting IT屋!
