How do I use NLTK's default tokenizer to get spans instead of strings?

Problem Description

NLTK's default tokenizer, nltk.word_tokenize, chains two tokenizers: a sentence tokenizer and then a word tokenizer that operates on each sentence. It does a pretty good job out of the box.

>>> nltk.word_tokenize("(Dr. Edwards is my friend.)")
['(', 'Dr.', 'Edwards', 'is', 'my', 'friend', '.', ')']

I'd like to use the same algorithm, except have it return tuples of offsets into the original string instead of string tokens.

By offsets I mean 2-tuples that can serve as indexes into the original string. For example, here I'd have:

>>> s = "(Dr. Edwards is my friend.)"
>>> s.token_spans()
[(0,1), (1,4), (5,12), (13,15), (16,18), (19,25), (25,26), (26,27)]

because s[0:1] is "(", s[1:4] is "Dr.", and so forth.

Is there a single NLTK call that does this, or do I have to write my own offset arithmetic?
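
For reference, if the offset arithmetic did have to be written by hand, a minimal sketch is to re-align each token from nltk.word_tokenize with str.find. The helper name spans_via_find below is my own, not an NLTK API, and the approach assumes every token occurs verbatim in the text; it breaks for tokens the tokenizer rewrites (e.g. straight double quotes become `` and '').

import nltk

def spans_via_find(text):
    """Re-align word_tokenize's tokens to the source text with str.find."""
    spans = []
    offset = 0
    for token in nltk.word_tokenize(text):
        start = text.find(token, offset)
        if start == -1:
            # Token was rewritten by the tokenizer (e.g. quote conversion);
            # this naive sketch simply skips it.
            continue
        offset = start + len(token)
        spans.append((start, offset))
    return spans

print(spans_via_find("(Dr. Edwards is my friend.)"))
# [(0, 1), (1, 4), (5, 12), (13, 15), (16, 18), (19, 25), (25, 26), (26, 27)]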

Recommended Answer

At least since NLTK 3.4, TreebankWordTokenizer supports span_tokenize:

>>> from nltk.tokenize import TreebankWordTokenizer as twt
>>> list(twt().span_tokenize('What is the airspeed of an unladen swallow ?'))
[(0, 4),
 (5, 7),
 (8, 11),
 (12, 20),
 (21, 23),
 (24, 26),
 (27, 34),
 (35, 42),
 (43, 44)]
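
To mirror word_tokenize's full sentence-then-word pipeline on multi-sentence input while keeping offsets into the original string, the sentence tokenizer's spans can be combined with the word tokenizer's. The following is a minimal sketch, not an NLTK API: token_spans is my own helper name, and it assumes the pretrained Punkt model is installed (nltk.download('punkt')).

import nltk.data
from nltk.tokenize import TreebankWordTokenizer

def token_spans(text):
    """Yield (start, end) word-token spans into text, chaining the
    pretrained Punkt sentence tokenizer with the Treebank word tokenizer."""
    sent_tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")
    word_tokenizer = TreebankWordTokenizer()
    for sent_start, sent_end in sent_tokenizer.span_tokenize(text):
        # Word spans are relative to the sentence, so shift them back
        # into the coordinates of the original string.
        for start, end in word_tokenizer.span_tokenize(text[sent_start:sent_end]):
            yield (sent_start + start, sent_start + end)

s = "(Dr. Edwards is my friend.)"
print([(start, end, s[start:end]) for start, end in token_spans(s)])
# Expected along the lines of:
# [(0, 1, '('), (1, 4, 'Dr.'), (5, 12, 'Edwards'), (13, 15, 'is'),
#  (16, 18, 'my'), (19, 25, 'friend'), (25, 26, '.'), (26, 27, ')')]

Note that recent NLTK versions route word_tokenize through a slightly improved Treebank-derived tokenizer, so spans from TreebankWordTokenizer may differ from word_tokenize's tokens in rare edge cases.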
