Word Count program in Hive
Question
I'm trying to learn Hive. Surprisingly, I can't find an example of how to write a simple word count job. Is the following correct?

Let's say I have an input file input.tsv:
hello, world
this is an example input file
I create a splitter in Python to turn each line into words:
import sys

for line in sys.stdin:
    for word in line.split():
        print(word)
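The splitter's behavior can be checked locally before wiring it into Hive. This is a hypothetical sanity check, not part of the question's job; it feeds the sample input lines through the same split rule the script uses:

```python
# Local sanity check of the splitter's logic: run the sample input
# through the same whitespace split the streaming script performs.
sample = "hello, world\nthis is an example input file\n"

words = []
for line in sample.splitlines():
    # str.split() with no arguments splits on any run of whitespace,
    # so punctuation such as "hello," stays attached to the word.
    words.extend(line.split())

print(words)
# → ['hello,', 'world', 'this', 'is', 'an', 'example', 'input', 'file']
```

Note the consequence for word counts: because punctuation is not stripped, "hello," and "hello" would be counted as different words.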
And then I have the following in my Hive script:
CREATE TABLE input (line STRING);
LOAD DATA LOCAL INPATH 'input.tsv' OVERWRITE INTO TABLE input;
-- temporary table to hold words...
CREATE TABLE words (word STRING);
add file splitter.py;
INSERT OVERWRITE TABLE words
SELECT TRANSFORM(line)
USING 'python splitter.py'
AS word
FROM input;
SELECT word, count(*) AS count FROM words GROUP BY word;
I'm not sure if I'm missing something, or if it really is this complicated. (In particular, do I need the temporary words table, and do I need to write the external splitter function?)
Answer
If you want a simple one, see the following:
SELECT word, COUNT(*) FROM input LATERAL VIEW explode(split(line, ' ')) lTable AS word GROUP BY word;
I use a lateral view to enable the use of a table-valued function (explode), which takes the array produced by the split function and outputs a new row for every value. In practice, I use a UDF that wraps IBM's ICU4J word breaker; I generally don't use transform scripts and use UDFs for everything. You don't need a temporary words table.
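What the one-query version computes can be sketched in Python. This is an illustration of the split-then-explode-then-group logic, assuming the same single-space split rule as split(line, ' '); it is not how Hive executes the query:

```python
from collections import Counter

# Emulate the LATERAL VIEW query: split each row on a single space
# (mirroring split(line, ' ')), "explode" the resulting arrays into
# one row per word, then group and count.
rows = ["hello, world", "this is an example input file"]

counts = Counter(word for line in rows for word in line.split(" "))

print(counts["world"])   # → 1
print(counts["hello,"])  # → 1
```

Grouping by the exploded column is exactly the GROUP BY word step; each distinct token becomes one output row with its count.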