How to avoid NLTK's sentence tokenizer splitting on abbreviations?


Problem Description

I'm currently using NLTK for language processing, but I have encountered a problem with sentence tokenization.

Here's the problem. Assume I have the sentence: "Fig. 2 shows a U.S.A. map." When I use the punkt tokenizer, my code looks like this:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters
punkt_param = PunktParameters()
abbreviation = ['U.S.A', 'fig']
punkt_param.abbrev_types = set(abbreviation)
tokenizer = PunktSentenceTokenizer(punkt_param)
tokenizer.tokenize('Fig. 2 shows a U.S.A. map.')

It returns the following:

['Fig. 2 shows a U.S.A.', 'map.']

The tokenizer can't detect the abbreviation "U.S.A.", but it worked on "fig". Now when I use the default tokenizer NLTK provides:

import nltk
nltk.tokenize.sent_tokenize('Fig. 2 shows a U.S.A. map.')

This time I get:

['Fig.', '2 shows a U.S.A. map.']

It recognizes the more common "U.S.A." but fails to see "fig"!

How can I combine these two methods? I want to use the default abbreviation set as well as add my own abbreviations.

Answer

Punkt stores abbreviation types in lowercase and without the trailing period, so using lowercase 'u.s.a' in the abbreviation list will work fine for you. Try this:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters
punkt_param = PunktParameters()
abbreviation = ['u.s.a', 'fig']
punkt_param.abbrev_types = set(abbreviation)
tokenizer = PunktSentenceTokenizer(punkt_param)
tokenizer.tokenize('Fig. 2 shows a U.S.A. map.')

It returns:

['Fig. 2 shows a U.S.A. map.']
