Creating a new corpus with NLTK


Question

I reckoned that often the answer to my title is to go and read the documentation, but I ran through the NLTK book and it doesn't give the answer. I'm kind of new to Python.

I have a bunch of .txt files, and I want to be able to use the corpus functions that NLTK provides for its corpus nltk_data.

I've tried PlaintextCorpusReader, but I couldn't get further than:

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.words()

How do I segment the newcorpus sentences using punkt? I tried the punkt functions, but they couldn't read the PlaintextCorpusReader class?

Can you also lead me to how I can write the segmented data into text files?

Answer

I think PlaintextCorpusReader already segments the input with a punkt tokenizer, at least if your input language is English.

The constructor of PlaintextCorpusReader:

def __init__(self, root, fileids,
             word_tokenizer=WordPunctTokenizer(),
             sent_tokenizer=nltk.data.LazyLoader(
                 'tokenizers/punkt/english.pickle'),
             para_block_reader=read_blankline_block,
             encoding='utf8'):

You can pass the reader a word and a sentence tokenizer, but for the latter the default already is nltk.data.LazyLoader('tokenizers/punkt/english.pickle').
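So sentences come back already segmented, and the default can be swapped out. A minimal sketch of both (the corpus root './' and the .txt pattern are placeholders for your own files, and the german.pickle override assumes the punkt models are installed via nltk.download('punkt')):

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> newcorpus = PlaintextCorpusReader('./', r'.*\.txt')
>>> newcorpus.sents()[0]   # first sentence, already punkt-segmented into words
>>> # Passing a different sentence tokenizer, e.g. for German input:
>>> newcorpus_de = PlaintextCorpusReader('./', r'.*\.txt',
...     sent_tokenizer=nltk.data.LazyLoader('tokenizers/punkt/german.pickle'))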

For a single string, a tokenizer would be used as follows (explained here; see section 5 for the punkt tokenizer):

>>> import nltk.data
>>> text = """
... Punkt knows that the periods in Mr. Smith and Johann S. Bach
... do not mark sentence boundaries.  And sometimes sentences
... can start with non-capitalized words.  i is a good variable
... name.
... """
>>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>> tokenizer.tokenize(text.strip())
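
As for writing the segmented data into text files, one straightforward sketch, continuing the session above ('segmented.txt' is a placeholder path):

>>> sents = tokenizer.tokenize(text.strip())   # list of sentence strings
>>> with open('segmented.txt', 'w', encoding='utf8') as out:
...     out.write('\n'.join(sents))            # one sentence per line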
