How to define special "untokenizable" words for nltk.word_tokenize
Problem description
I'm using nltk.word_tokenize for tokenizing some sentences which contain programming languages, frameworks, etc., which get incorrectly tokenized.
For example:
>>> from nltk import tokenize
>>> tokenize.word_tokenize("I work with C#.")
['I', 'work', 'with', 'C', '#', '.']
Is there a way to enter a list of "exceptions" like this to the tokenizer? I already have compiled a list of all the things (languages, etc.) that I don't want to split.
Recommended answer
The Multi Word Expression Tokenizer should be what you need.
You add the list of exceptions as tuples and pass it the already-tokenized sentences:
>>> import nltk
>>> tokenizer = nltk.tokenize.MWETokenizer()
>>> tokenizer.add_mwe(('C', '#'))
>>> tokenizer.add_mwe(('F', '#'))
>>> tokenizer.tokenize(['I', 'work', 'with', 'C', '#', '.'])
['I', 'work', 'with', 'C_#', '.']
>>> tokenizer.tokenize(['I', 'work', 'with', 'F', '#', '.'])
['I', 'work', 'with', 'F_#', '.']
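Note that the default behavior joins the matched pieces with an underscore, giving `C_#` rather than `C#`. MWETokenizer also accepts the exception list in its constructor and takes a `separator` argument; passing an empty separator recovers the literal tokens. A minimal sketch (the exception list here is illustrative, not exhaustive):

```python
from nltk.tokenize import MWETokenizer

# Illustrative exception list: each entry is the tuple of pieces that
# word_tokenize produces for a term we want kept together.
exceptions = [('C', '#'), ('F', '#'), ('C', '+', '+')]

# separator='' rejoins the matched pieces with nothing in between,
# so ('C', '#') collapses back to the literal token 'C#' instead of 'C_#'.
tokenizer = MWETokenizer(exceptions, separator='')

tokens = tokenizer.tokenize(['I', 'work', 'with', 'C', '#', 'and', 'C', '+', '+', '.'])
print(tokens)
```

This keeps the two-step pipeline from the answer above (word_tokenize first, then MWETokenizer over the token list) while making the merged tokens match the original surface forms.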