How to prevent splitting specific words or phrases and numbers in NLTK?

Problem description

I have a problem with text matching when I tokenize text, because tokenization splits up specific words, dates and numbers. How can I prevent phrases like "run in my family", "30 minute walk" or "4x a day" from being split when tokenizing words in NLTK?

They should not result in:

['runs','in','my','family','4x','a','day']

For example:

Yes 20-30 minutes a day on my bike, it works great!!

gives:

['yes','20-30','minutes','a','day','on','my','bike',',','it','works','great']

I want '20-30 minutes' to be treated as a single word. How can I get this behavior?

Recommended answer

To my knowledge, you will be hard pressed to preserve n-grams of various lengths while tokenizing, but you can find these n-grams as shown here. Then you can replace the items in the corpus that you want kept as n-grams with some joining character, such as dashes.

This is an example solution, but there are probably lots of ways to get there. Important note: the code below provides a way to find n-grams that are common in the text. You will probably want more than one, so the variable n_most_common lets you decide how many of the most common n-grams to collect (you might want a different number for each length, but only one variable is provided for now). This may still miss n-grams you consider important; to cover those, add them to user_grams and they will be included in the search.

import nltk 

#an example corpus
corpus='''A big tantrum runs in my family 4x a day, every week. 
A big tantrum is lame. A big tantrum causes strife. It runs in my family 
because of our complicated history. Every week is a lot though. Every week
I dread the tantrum. Every week...Here is another ngram I like a lot'''.lower()

#tokenize the corpus
corpus_tokens = nltk.word_tokenize(corpus)

#create ngrams from n=2 to 5
bigrams = list(nltk.ngrams(corpus_tokens,2))
trigrams = list(nltk.ngrams(corpus_tokens,3))
fourgrams = list(nltk.ngrams(corpus_tokens,4))
fivegrams = list(nltk.ngrams(corpus_tokens,5))
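
For reference, nltk.ngrams yields tuples of consecutive tokens, which is why the concatenation step below joins the tuple elements back into strings. A quick illustration on the corpus above:

print(bigrams[:2])
#expected: [('a', 'big'), ('big', 'tantrum')]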

This section finds the most common n-grams, from bigrams up to five-grams.

#if you change this to zero you will only get the user chosen ngrams
n_most_common=1 #how many of the most common n-grams do you want.

fdist_bigrams = nltk.FreqDist(bigrams).most_common(n_most_common) #n most common bigrams
fdist_trigrams = nltk.FreqDist(trigrams).most_common(n_most_common) #n most common trigrams
fdist_fourgrams = nltk.FreqDist(fourgrams).most_common(n_most_common) #n most common four grams
fdist_fivegrams = nltk.FreqDist(fivegrams).most_common(n_most_common) #n most common five grams

#concat the ngrams together
fdist_bigrams   = [' '.join(x[0]) for x in fdist_bigrams]
fdist_trigrams  = [' '.join(x[0]) for x in fdist_trigrams]
fdist_fourgrams = [' '.join(x[0]) for x in fdist_fourgrams]
fdist_fivegrams = [' '.join(x[0]) for x in fdist_fivegrams]

#next 4 lines create a single list with important ngrams
n_grams=fdist_bigrams
n_grams.extend(fdist_trigrams)
n_grams.extend(fdist_fourgrams)
n_grams.extend(fdist_fivegrams)

This section lets you add your own ngrams to a list

#Another option here would be to make your own list of the ones you want
#in this example I add some user ngrams to the ones found above
user_grams=['ngram1 I like', 'ngram 2', 'another ngram I like a lot']
user_grams=[x.lower() for x in user_grams]    

n_grams.extend(user_grams)

And this last part performs the processing so that you can tokenize again and get the ngrams as tokens.

#initialize the corpus that will have combined ngrams
corpus_ngrams=corpus

#here we go through the ngrams we found and replace them in the corpus with
#a dash-connected version. That way we can find them when we tokenize.
for gram in n_grams:
    gram_r = gram.replace(' ', '-')
    corpus_ngrams = corpus_ngrams.replace(gram, gram_r)

#retokenize the new corpus so we can find the ngrams
corpus_ngrams_tokens= nltk.word_tokenize(corpus_ngrams)

print(corpus_ngrams_tokens)

Out: ['a-big-tantrum', 'runs-in-my-family', '4x', 'a', 'day', ',', 'every-week', '.', 'a-big-tantrum', 'is', 'lame', '.', 'a-big-tantrum', 'causes', 'strife', '.', 'it', 'runs-in-my-family', 'because', 'of', 'our', 'complicated', 'history', '.', 'every-week', 'is', 'a', 'lot', 'though', '.', 'every-week', 'i', 'dread', 'the', 'tantrum', '.', 'every-week', '...']
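
If you need the original phrases back for downstream matching, one option is to undo the dash-joining after tokenizing. A minimal sketch, assuming none of the protected phrases legitimately contain a dash (a phrase like '20-30 minutes' would need a different joining character):

#hypothetical post-processing: map dash-joined tokens back to the original phrases
recovered_tokens = [token.replace('-', ' ') for token in corpus_ngrams_tokens]
print(recovered_tokens[:4])
#expected: ['a big tantrum', 'runs in my family', '4x', 'a']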

I think this is actually a very good question.
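
As a side note, NLTK also ships a multi-word-expression tokenizer, nltk.tokenize.MWETokenizer, which merges pre-specified phrases back into single tokens after ordinary tokenization. It is a different technique from the dash-joining above and only helps when the phrases to protect are known in advance; a minimal sketch for the question's example:

import nltk
from nltk.tokenize import MWETokenizer

#phrases to keep together, given as tuples of already-tokenized words
mwe_tokenizer = MWETokenizer([('20-30', 'minutes'), ('runs', 'in', 'my', 'family')], separator=' ')

sentence = "Yes 20-30 minutes a day on my bike, it works great!!"
tokens = mwe_tokenizer.tokenize(nltk.word_tokenize(sentence.lower()))
print(tokens)
#expected: ['yes', '20-30 minutes', 'a', 'day', 'on', 'my', 'bike', ',', 'it', 'works', 'great', '!', '!']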
