How can I include a Python package with a Hadoop streaming job?

Problem description

I am trying to include a Python package (NLTK) with a Hadoop streaming job, but am not sure how to do this without including every file manually via the CLI argument "-file".

One solution would be to install this package on all the slaves, but I don't have that option currently.

Recommended answer

I would zip up the package into a .tar.gz or a .zip and pass the entire tarball or archive in a -file option to your hadoop command. I've done this in the past with Perl, but not Python.
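
A rough sketch of what that submission might look like (the archive name, the location of the streaming jar, and the HDFS paths below are assumptions, not part of the original answer):

```bash
# Bundle the installed nltk package into a zip archive (run from the
# directory that contains the nltk/ package, e.g. site-packages).
zip -r nltk.zip nltk/

# Ship the archive alongside the mapper with -file; Hadoop copies both
# files into each task's working directory before the job starts.
hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
    -input /user/me/input \
    -output /user/me/output \
    -mapper mapper.py \
    -file mapper.py \
    -file nltk.zip
```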

That said, I would think this would still work for you if you use Python's zipimport (http://docs.python.org/library/zipimport.html), which allows you to import modules directly from a zip.
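
As a sketch of the mapper side under that approach (the archive name and the bigram counting are purely illustrative; NLTK components that rely on downloaded corpora or data files may need extra handling):

```python
#!/usr/bin/env python
"""Hypothetical Hadoop streaming mapper (mapper.py) that loads NLTK from a
zip archive shipped with the job via -file."""
import sys

# The archive passed with -file ends up in the task's working directory.
# Putting it on sys.path lets Python's zipimport machinery load pure-Python
# modules straight out of the zip, without unpacking it first.
sys.path.insert(0, 'nltk.zip')  # archive name is an assumption

import nltk  # resolved from inside nltk.zip via zipimport


def main():
    # Toy example: emit a (bigram, 1) pair for each bigram on an input line.
    # nltk.bigrams is pure Python, so it needs no downloaded corpora.
    for line in sys.stdin:
        tokens = line.split()
        for left, right in nltk.bigrams(tokens):
            print('%s %s\t1' % (left, right))


if __name__ == '__main__':
    main()
```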
