How to get frequently occurring phrases with Lucene


Question

I would like to get some frequently occurring phrases with Lucene. I am getting some information from TXT files, and I am losing a lot of context by not having information about phrases; e.g., "information retrieval" is indexed as two separate words.

What is the way to get phrases like this? I cannot find anything useful on the internet; any advice, links, hints, and especially examples are appreciated!

Edit: I store my documents just by title and content:

 Document doc = new Document();
 // "name": stored but not analyzed; "text": tokenized from a Reader (not stored), with term vectors including positions and offsets
 doc.add(new Field("name", f.getName(), Field.Store.YES, Field.Index.NOT_ANALYZED));
 doc.add(new Field("text", fReader, Field.TermVector.WITH_POSITIONS_OFFSETS));

because for what I am doing, the most important thing is the content of the file. Titles are often not descriptive at all (e.g., I have many PDF academic papers whose titles are codes or numbers).

I desperately need to index the most frequently occurring phrases from the text content; only now do I see how limited this simple "bag of words" approach is.

Answer

Julia, it seems what you are looking for is n-grams, specifically bigrams (also called collocations).

Here's a chapter about finding collocations (PDF) from Manning and Schutze's Foundations of Statistical Natural Language Processing.
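The methods in that chapter start from the simplest idea: tokenize the text and count how often each pair of adjacent words occurs. As a rough, Lucene-independent sketch of that counting step (the file path argument and the crude split on non-letter characters are my own assumptions, not something from the answer):

 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 import java.util.HashMap;
 import java.util.Map;

 // Illustrative sketch only: count adjacent word pairs (bigrams) in one plain-text file.
 public class BigramCounter {
     public static void main(String[] args) throws IOException {
         // Read the whole file and lowercase it; splitting on non-letter
         // characters is a crude stand-in for a real tokenizer.
         String text = new String(Files.readAllBytes(Paths.get(args[0]))).toLowerCase();
         String[] words = text.split("[^\\p{L}]+");

         // Count every pair of adjacent words, e.g. "information retrieval".
         Map<String, Integer> counts = new HashMap<>();
         for (int i = 0; i + 1 < words.length; i++) {
             if (words[i].isEmpty() || words[i + 1].isEmpty()) continue;
             counts.merge(words[i] + " " + words[i + 1], 1, Integer::sum);
         }

         // Print the 20 most frequent bigrams.
         counts.entrySet().stream()
               .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
               .limit(20)
               .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
     }
 }

Raw counts alone favor pairs like "of the", which is why the chapter goes on to statistical tests (t-test, chi-square, likelihood ratios, mutual information) to separate genuine collocations from incidental word pairs.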

In order to do this with Lucene, I suggest using Solr with ShingleFilterFactory. Please see this discussion for details.
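For reference, ShingleFilterFactory is the Solr factory for Lucene's ShingleFilter, so if you stay with plain Lucene you can get the same effect by wrapping whatever analyzer you use for the "text" field. A minimal sketch along those lines, assuming the Lucene 3.x API that matches the Field usage in the question (the class name and the choice to emit only two-word shingles are my own):

 import java.io.Reader;

 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.TokenStream;
 import org.apache.lucene.analysis.shingle.ShingleFilter;
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.util.Version;

 // Illustrative sketch, assuming Lucene 3.x with the contrib analyzers jar on the classpath.
 public class BigramAnalyzer extends Analyzer {
     private final Analyzer base = new StandardAnalyzer(Version.LUCENE_36);

     @Override
     public TokenStream tokenStream(String fieldName, Reader reader) {
         // Join adjacent tokens into two-word "shingles", so that
         // "information retrieval" is indexed as a single term.
         ShingleFilter shingles = new ShingleFilter(base.tokenStream(fieldName, reader), 2);
         shingles.setOutputUnigrams(false); // drop single words, keep only the phrases
         return shingles;
     }
 }

If the content field is indexed with an analyzer like this, the most frequent phrases can then be read back from the index, for example by iterating the field's terms and sorting them by document frequency.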
