How to import nltk corpus in HDFS when I use hadoop streaming


Question


I have a little problem: I want to use an nltk corpus in HDFS, but it failed. For example, I want to load nltk stopwords in my Python code.
I followed this guide: http://eigenjoy.com/2009/11/18/how-to-use-cascading-with-hadoop-streaming/

I did everything it says, but I don't know how to adapt it to my job. My nltk directory is named nltk-2.0.1.rc1 and my PyYAML directory is named PyYAML.3.0.1, so my command is:

    zip -r nltkandyaml.zip nltk-2.0.1.rc1 PyYAML.3.0.1

Then the guide says to run "mv ntlkandyaml.zip /path/to/where/your/mapper/will/be/nltkandyaml.mod".

My mapper.py is saved at /home/mapreduce/mapper.py, so my command is:

    mv ntlkandyaml.zip /home/mapreduce/nltkandyaml.mod

Is that right?
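One thing worth checking at this step: zipimport only finds packages whose directory sits at the archive root, so zipping the distribution directory (nltk-2.0.1.rc1) leaves the importable nltk package nested one level down, where it cannot be imported. A small sketch (with a toy package standing in for nltk, names made up):

```python
import os
import sys
import tempfile
import zipfile

# Build an archive the way `zip -r nltkandyaml.zip nltk-2.0.1.rc1` would:
# the importable package ('pkg' here, 'nltk' in the real case) ends up
# nested under the distribution directory instead of at the archive root.
tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "nested.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("dist-1.0/pkg/__init__.py", "X = 1\n")

sys.path.insert(0, archive)  # same machinery zipimport.zipimporter uses
try:
    import pkg
    print("imported")
except ImportError:
    # The finder only looks for 'pkg/' at the top of the archive.
    print("not importable: package is not at the archive root")
```

This is exactly the layout problem the accepted answer's `zip` commands avoid by archiving the inner package directory.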

Then I zip my stopwords corpus:

    zip -r /nltk_data/corpora/stopwords-flat.zip *
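The "flat" part matters here: each language file should sit at the archive root, one word per line, with no leading directory in the entry names. A sketch of what stopwords-flat.zip should contain (the word lists are made up; the real NLTK files are named by language, e.g. english, french):

```python
import io
import zipfile

# Hypothetical stopword lists standing in for the real corpus files.
data = {"english": "a\nan\nthe\n", "french": "le\nla\nles\n"}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for lang, words in data.items():
        zf.writestr(lang, words)  # 'flat': entry name has no directory prefix

with zipfile.ZipFile(buf) as zf:
    names = sorted(zf.namelist())
    english = zf.read("english").decode("utf-8").split()

print(names)    # ['english', 'french']
print(english)  # ['a', 'an', 'the']
```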

In my code I use:

    importer = zipimport.zipimporter('nltkandyaml.mod')
    yaml = importer.load_module('PyYAML-3.09')
    nltk = importer.load_module('nltk-2.1.0.1rc1')
    from nltk.corpus.reader import stopwords
    from nltk.corpus.reader import StopWordsCorpusReader
    nltk.data.path += ["."]
    stopwords = StopWordsCorpusReader(nltk.data.find('lib/stopwords-flat.zip'))
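If the StopWordsCorpusReader route keeps failing, a dependency-free fallback (assuming the flat layout above; the helper name `load_stopwords` is made up for illustration) is to read a language file straight out of the zip:

```python
import io
import zipfile

def load_stopwords(zip_source, lang="english"):
    """Read one flat stopword file (one word per line) out of the archive."""
    with zipfile.ZipFile(zip_source) as zf:
        return set(zf.read(lang).decode("utf-8").split())

# Self-contained demo with a stub archive in place of stopwords-flat.zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("english", "a\nan\nthe\n")

sw = load_stopwords(buf)
print(sorted(sw))  # ['a', 'an', 'the']
```

In the streaming job itself, `load_stopwords('stopwords-flat.zip')` would work because -file ships the archive into the task's working directory.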

Finally, I use the command:

    bin/hadoop jar /home/../streaming/hadoop-0.21.0-streaming.jar \
        -input /user/root/input/voa.txt -output /user/root/output \
        -mapper /home/../mapper.py -reducer /home/../reducer.py \
        -file /home/../nltkandyaml.mod -file /home/../stopwords-flat.zip
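For reference, a minimal mapper of the shape this streaming command expects might look like the sketch below (the stopword set is inlined here as a stand-in; the real job would load it from the shipped zip, and the loop would read sys.stdin):

```python
# Stand-in stopword set; the real job would load it from stopwords-flat.zip.
STOPWORDS = {"a", "an", "the", "is"}

def map_line(line):
    """Emit one 'word<TAB>1' pair per non-stopword token, streaming-style."""
    return [f"{tok}\t1" for tok in line.lower().split() if tok not in STOPWORDS]

# In the real mapper this loop would iterate over sys.stdin instead.
for line in ["The cat is a cat", "an old dog"]:
    for kv in map_line(line):
        print(kv)
```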

Please tell me where I went wrong.

Thank you all.

Solution

    zip -r nltk.zip [your-nltk-package-name]/nltk

    zip -r yaml.zip [your-yaml-package-name]/lib/yaml

then in your script, add:

    importer = zipimport.zipimporter('nltk.zip')
    importer2 = zipimport.zipimporter('yaml.zip')
    yaml = importer2.load_module('yaml')
    nltk = importer.load_module('nltk')

in your command, add:

    -file [path-to-your-zip-file]
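Putting the answer together, here is a sketch of why this layout works (a stub nltk.zip is built here so the snippet runs standalone; in the real job the -file zips land in the task's working directory, so relative names work). Once the package directory sits at the archive root, simply putting the zip on sys.path is enough, which also avoids zipimporter.load_module (deprecated in newer Pythons):

```python
import os
import sys
import tempfile
import zipfile

# Stub for the nltk.zip the answer builds; the real archive holds the library.
tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "nltk.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("nltk/__init__.py", "__version__ = 'stub'\n")

# With the package at the archive root, the zip itself acts as an import path.
sys.path.insert(0, archive)
import nltk

print(nltk.__version__)  # stub
```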

