How to add NLTK corpora to a Google Cloud Function?
Question
I am trying to run a Google Cloud Function that uses NLTK. I added textblob==0.15.3 and nltk==3.4.3 to requirements.txt. But every time I run the script it crashes, and the log shows "Please use the NLTK Downloader to obtain the resource:".
I know the NLTK corpora need to be downloaded to run the script on a local system, but I'm not sure how to download them for Google Cloud Functions. Any help would be greatly appreciated. Thanks in advance.
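For context, one workaround in a deployed function is to download the corpora into the writable /tmp directory on cold start, instead of bundling them. A minimal sketch, assuming nltk is listed in requirements.txt (the helper names here are hypothetical):

```python
import os

# /tmp is the only writable location in the Cloud Functions runtime.
NLTK_DATA_DIR = "/tmp/nltk_data"

def prepare_data_dir(data_dir=NLTK_DATA_DIR):
    """Create the data directory and point NLTK at it via NLTK_DATA."""
    os.makedirs(data_dir, exist_ok=True)
    os.environ["NLTK_DATA"] = data_dir
    return data_dir

def ensure_corpora(packages=("punkt",)):
    """Download the needed corpora once, on cold start (hypothetical helper)."""
    data_dir = prepare_data_dir()
    import nltk  # imported lazily so module import stays cheap
    for pkg in packages:
        nltk.download(pkg, download_dir=data_dir, quiet=True)
```

Calling `ensure_corpora()` at module level would run it once per instance, though each cold start pays the download cost, which is why the bundling approach below is usually preferred.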
Answer
This is how I fetch nltk_data in my Travis pipeline:
# To install the core NLTK package
pip install nltk
# Installs only the extra packages you need. You could also use 'all' instead.
python -m nltk.downloader punkt averaged_perceptron_tagger wordnet
Then copy the folder into your function folder and zip it up:
mkdir -p function/nltk_data/
cp -a ~/nltk_data/. function/nltk_data/
cp -a path/to/your/code/. function/
Be sure to set the NLTK_DATA environment variable. Since my folder structure was:
- nltk_data/
- main.py
- requirements.txt
I only needed to set NLTK_DATA=nltk_data, and Python could then find the files.
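If setting the variable in the deployment configuration isn't convenient, it can also be set at the top of main.py, before nltk is imported (NLTK reads NLTK_DATA when it builds its data search path). A sketch, assuming the folder layout above:

```python
import os

# Point NLTK at the bundled data folder before any nltk import runs.
# The path is resolved relative to this file so it works regardless of
# the function's working directory.
os.environ["NLTK_DATA"] = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), "nltk_data"
)
```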
Hope this helps!