Creating a new corpus with NLTK
Question
I reckoned that the answer to my title is often to go and read the documentation, but I ran through the NLTK book and it doesn't give the answer. I'm kind of new to Python.
I have a bunch of .txt files, and I want to be able to use the corpus functions that NLTK provides for the corpora in nltk_data.
I've tried PlaintextCorpusReader, but I couldn't get further than:
>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.words()
How do I segment the sentences in newcorpus using punkt? I tried the punkt functions, but they couldn't read the PlaintextCorpusReader class.
Can you also point me to how I can write the segmented data to text files?
Answer

I think the PlaintextCorpusReader already segments the input with a punkt tokenizer, at least if your input language is English.
PlaintextCorpusReader's constructor:
def __init__(self, root, fileids,
             word_tokenizer=WordPunctTokenizer(),
             sent_tokenizer=nltk.data.LazyLoader(
                 'tokenizers/punkt/english.pickle'),
             para_block_reader=read_blankline_block,
             encoding='utf8'):
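So out of the box, sents() on the reader should already return punkt-segmented sentences. A minimal sketch of using it, reusing the corpus_root and '.*' pattern from the question (note that '.*' will match every file in the directory, not just the .txt files):

>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.sents()                                # all sentences, each a list of word tokens
>>> newcorpus.sents(fileids=newcorpus.fileids()[0])  # sentences of the first file only
>>> newcorpus.paras()                                # paragraphs, via read_blankline_block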
You can pass the reader a word and sentence tokenizer, but for the latter the default already is nltk.data.LazyLoader('tokenizers/punkt/english.pickle').
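So you would only need to pass your own sent_tokenizer if the default English model is wrong for your data. A hedged sketch for non-English text (german.pickle is one of the models shipped with the punkt data package; substitute the language you need):

>>> import nltk.data
>>> from nltk.corpus import PlaintextCorpusReader
>>> from nltk.tokenize import WordPunctTokenizer
>>> german_punkt = nltk.data.LazyLoader('tokenizers/punkt/german.pickle')  # assumed model path
>>> newcorpus = PlaintextCorpusReader('./', '.*',
...                                   word_tokenizer=WordPunctTokenizer(),
...                                   sent_tokenizer=german_punkt)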
For a single string, a tokenizer would be used as follows (explained here; see section 5 for the punkt tokenizer).
>>> import nltk.data
>>> text = """
... Punkt knows that the periods in Mr. Smith and Johann S. Bach
... do not mark sentence boundaries. And sometimes sentences
... can start with non-capitalized words. i is a good variable
... name.
... """
>>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>> tokenizer.tokenize(text.strip())
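As for writing the segmented data to text files: sents() yields each sentence as a list of tokens, so one rough approach (the sentences.txt filename is just my own choice here) is to join the tokens and write one sentence per line:

>>> with open('sentences.txt', 'w', encoding='utf8') as out:
...     for sent in newcorpus.sents():
...         out.write(' '.join(sent) + '\n')  # one sentence per line

Note that joining tokens with spaces is a crude detokenization; punctuation ends up preceded by a space.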