Stanford NER in NLTK not tagging multiple sentences correctly - Python


Problem description

I have a function which returns the named entities in a given body of text, using the Stanford NER.

def get_named_entities(text):
    load_ner_files()

    print text[:100] # to show that the text is fine
    text_split = text.split()        
    print text_split # to show the split is working fine
    result = "named entities = ", st.tag(text_split)
    return result

I am loading in the text from a url using the newspaper Python package.

def get_page_text():
    url = "https://aeon.co/essays/elon-musk-puts-his-case-for-a-multi-planet-civilisation"
    page = Article(url)
    page.download()
    page.parse() 
    return unicodedata.normalize('NFKD', page.text).encode('ascii', 'ignore')
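As an aside, the `unicodedata.normalize('NFKD', ...).encode('ascii', 'ignore')` step strips accents and silently drops any character that has no ASCII equivalent. A minimal sketch of what it does (`to_ascii` is a hypothetical helper name, not part of the question's code):

```python
import unicodedata

def to_ascii(text):
    # NFKD decomposes characters (e.g. 'é' -> 'e' + combining accent);
    # encode(..., 'ignore') then drops everything outside ASCII.
    return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')

print(to_ascii('Café déjà vu'))  # -> Cafe deja vu
```

Characters with no decomposed ASCII base (curly quotes, dashes, CJK text) disappear entirely, so this can lose content, not just accents.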

However, when I run the function I get the following output:

['Fuck', 'Earth!', 'Elon', 'Musk', 'said', 'to', 'me,', 'laughing.', 'Who', 'cares', 'about', 'Earth?'......... (continued)
named entities = [('Fuck', 'O'), ('Earth', 'O'), ('!', 'O')]

So my question is, why are only the first three words being tagged?
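Note that `text.split()` only splits on whitespace, so punctuation stays glued to the words (`'Earth!'`, `'me,'`, `'laughing.'`), which is not the tokenization the tagger expects. The difference can be illustrated with a simple regex standing in for a real tokenizer such as `nltk.word_tokenize`:

```python
import re

raw = "Fuck Earth! Elon Musk said to me, laughing."

# Naive whitespace split: punctuation stays attached to the words.
print(raw.split())

# Tokenizer-style split: punctuation becomes separate tokens.
print(re.findall(r"\w+|[^\w\s]", raw))
```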

Answer

Assuming that you have set up NLTK v3.2 properly, see:

  • Stanford Parser and NLTK
  • https://gist.github.com/alvations/e1df0ba227e542955a8a
  • https://gist.github.com/alvations/0ed8641d7d2e1941b9f9

TL;DR:

pip install -U nltk

# or, with conda:
conda update nltk


After setting up NLTK and Stanford Tools (remember to set environment variables):
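NLTK's `StanfordNERTagger` locates the Stanford jar and model directory through the `CLASSPATH` and `STANFORD_MODELS` environment variables. A sketch of setting them from Python; the directory path below is a placeholder that you would replace with your own Stanford NER install location:

```python
import os

# Placeholder path: point this at your own Stanford NER download.
stanford_dir = '/usr/local/share/stanford-ner-2015-12-09'

os.environ['CLASSPATH'] = os.path.join(stanford_dir, 'stanford-ner.jar')
os.environ['STANFORD_MODELS'] = os.path.join(stanford_dir, 'classifiers')

# With these set, StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')
# can resolve the model by filename alone.
```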

import time
import urllib.request
from itertools import chain

from bs4 import BeautifulSoup
from nltk import word_tokenize, sent_tokenize
from nltk.tag import StanfordNERTagger

class Article:
    def __init__(self, url, encoding='utf8'):
        self.url = url
        self.encoding = encoding
        self.text = self.fetch_url_text()
        self.process_text()

    def fetch_url_text(self):
        response = urllib.request.urlopen(self.url)
        self.data = response.read().decode(self.encoding)
        self.bsoup = BeautifulSoup(self.data, 'html.parser')
        return '\n'.join([paragraph.text for paragraph 
                            in self.bsoup.find_all('p')])

    def process_text(self):
        self.paragraphs = [sent_tokenize(p.strip()) 
                            for p in self.text.split('\n') if p]
        _sents = list(chain(*self.paragraphs))
        self.sents = [word_tokenize(sent) for sent in _sents]
        self.words = list(chain(*self.sents))


url = 'https://aeon.co/essays/elon-musk-puts-his-case-for-a-multi-planet-civilisation'

a1 = Article(url)
three_sentences = a1.sents[20:23]

st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')


# Tag multiple sentences in one go.
start = time.time()
tagged_sents = st.tag_sents(three_sentences)
print("Tagging took:", time.time() - start)
print(tagged_sents, end="\n\n")

for sent in tagged_sents:
    print(sent)
print()

# (Much slower) Tagging sentences one at a time:
# the Stanford NER process is relaunched for every sentence.
start = time.time()
tagged_sents = [st.tag(sent) for sent in three_sentences]
print("Tagging took:", time.time() - start)
for sent in tagged_sents:
    print(sent)
print()

[out]:

Tagging took: 2.537247657775879
[[('Musk', 'PERSON'), ('was', 'O'), ('laughing', 'O'), ('because', 'O'), ('he', 'O'), ('was', 'O'), ('joking', 'O'), (':', 'O'), ('he', 'O'), ('cares', 'O'), ('a', 'O'), ('great', 'O'), ('deal', 'O'), ('about', 'O'), ('Earth', 'LOCATION'), ('.', 'O')], [('When', 'O'), ('he', 'O'), ('is', 'O'), ('not', 'O'), ('here', 'O'), ('at', 'O'), ('SpaceX', 'ORGANIZATION'), (',', 'O'), ('he', 'O'), ('is', 'O'), ('running', 'O'), ('an', 'O'), ('electric', 'O'), ('car', 'O'), ('company', 'O'), ('.', 'O')], [('But', 'O'), ('this', 'O'), ('is', 'O'), ('his', 'O'), ('manner', 'O'), ('.', 'O')]]

[('Musk', 'PERSON'), ('was', 'O'), ('laughing', 'O'), ('because', 'O'), ('he', 'O'), ('was', 'O'), ('joking', 'O'), (':', 'O'), ('he', 'O'), ('cares', 'O'), ('a', 'O'), ('great', 'O'), ('deal', 'O'), ('about', 'O'), ('Earth', 'LOCATION'), ('.', 'O')]
[('When', 'O'), ('he', 'O'), ('is', 'O'), ('not', 'O'), ('here', 'O'), ('at', 'O'), ('SpaceX', 'ORGANIZATION'), (',', 'O'), ('he', 'O'), ('is', 'O'), ('running', 'O'), ('an', 'O'), ('electric', 'O'), ('car', 'O'), ('company', 'O'), ('.', 'O')]
[('But', 'O'), ('this', 'O'), ('is', 'O'), ('his', 'O'), ('manner', 'O'), ('.', 'O')]

Tagging took: 7.375355243682861
[('Musk', 'PERSON'), ('was', 'O'), ('laughing', 'O'), ('because', 'O'), ('he', 'O'), ('was', 'O'), ('joking', 'O'), (':', 'O'), ('he', 'O'), ('cares', 'O'), ('a', 'O'), ('great', 'O'), ('deal', 'O'), ('about', 'O'), ('Earth', 'LOCATION'), ('.', 'O')]
[('When', 'O'), ('he', 'O'), ('is', 'O'), ('not', 'O'), ('here', 'O'), ('at', 'O'), ('SpaceX', 'ORGANIZATION'), (',', 'O'), ('he', 'O'), ('is', 'O'), ('running', 'O'), ('an', 'O'), ('electric', 'O'), ('car', 'O'), ('company', 'O'), ('.', 'O')]
[('But', 'O'), ('this', 'O'), ('is', 'O'), ('his', 'O'), ('manner', 'O'), ('.', 'O')]
