Stanford Dependency Parser Setup and NLTK


Question

So I got the "standard" Stanford Parser to work thanks to danger89's answers to this previous post, Stanford Parser and NLTK.

However, I am now trying to get the dependency parser to work and it seems the method highlighted in the previous link no longer works. Here is my code:

import nltk
import os
java_path = "C:\\Program Files\\Java\\jre1.8.0_51\\bin\\java.exe" 
os.environ['JAVAHOME'] = java_path


from nltk.parse import stanford
os.environ['STANFORD_PARSER'] = 'path/jar'
os.environ['STANFORD_MODELS'] = 'path/jar'
parser = stanford.StanfordDependencyParser(model_path="path/jar/englishPCFG.ser.gz")

sentences = parser.raw_parse_sents(nltk.sent_tokenize("The iPod is expensive but pretty."))

I get the following error: 'module' object has no attribute 'StanfordDependencyParser'

The only thing I changed was "StanfordParser" to "StanfordDependencyParser". Any ideas how I can get this to work?

I also tried the Stanford Neural Dependency parser by importing it as shown in the documentation here: http://www.nltk.org/_modules/nltk/parse/stanford.html

That didn't work either.

Pretty new to NLTK. Thanks in advance for any helpful input.

Answer

The StanfordDependencyParser API is a new class object created since NLTK version 3.1.
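A quick way to confirm this is the cause is to check whether the installed nltk.parse.stanford module defines the class at all. A minimal sketch (the has_class helper is my own, demonstrated here against a stdlib module so it runs without NLTK or the Stanford jars):

```python
import importlib

def has_class(module_name, class_name):
    """Return True if module_name can be imported and defines class_name."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, class_name)

# Against your NLTK install you would check:
#   has_class("nltk.parse.stanford", "StanfordDependencyParser")
# Stdlib demonstration:
print(has_class("json", "JSONDecoder"))   # True
print(has_class("json", "NoSuchParser"))  # False
```

If the probe returns False for StanfordDependencyParser, your NLTK predates 3.1 and upgrading (below) is the fix.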

Ensure that you have the latest NLTK available, either through pip:

pip install -U nltk

or through your Linux package manager, e.g.:

sudo apt-get install python-nltk

or, on Windows, download NLTK from https://pypi.python.org/pypi/nltk and install it; it should overwrite your previous NLTK version.
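Whichever route you take, you can then verify that the installed version meets the 3.1 requirement by comparing nltk.__version__ against (3, 1). A sketch of the comparison (the version_at_least helper is mine, and it assumes a plain dotted version string like "3.0.5"):

```python
def version_at_least(version_string, minimum):
    """Compare a dotted version string like '3.0.5' against a (major, minor) tuple."""
    parts = tuple(int(p) for p in version_string.split(".")[:2])
    return parts >= minimum

# With NLTK installed you would check:
#   import nltk
#   version_at_least(nltk.__version__, (3, 1))
print(version_at_least("3.0.5", (3, 1)))  # False: too old for StanfordDependencyParser
print(version_at_least("3.1", (3, 1)))    # True
```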

Then you can use the API as shown in the documentation:

from nltk.parse.stanford import StanfordDependencyParser

dep_parser = StanfordDependencyParser(model_path="edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz")
print([parse.tree() for parse in dep_parser.raw_parse("The quick brown fox jumps over the lazy dog.")])

[Output]:

[Tree('jumps', [Tree('fox', ['The', 'quick', 'brown']), Tree('dog', ['over', 'the', 'lazy'])])]

(Note: Make sure your paths to the jars and your os.environ variables are correct; on Windows a path looks like something\\something\\some\\path, on Unix it looks like something/something/some/path.)
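One way to sidestep the Windows-vs-Unix separator issue is to build the paths with os.path.join, which uses the right separator for whichever platform the script runs on. A sketch (the directory names are placeholders, not real install locations):

```python
import os

# Placeholder directory; substitute wherever you unpacked the Stanford tools.
parser_dir = os.path.join("path", "to", "stanford-parser")

# os.path.join picks the platform's separator:
# 'path/to/stanford-parser' on Unix, 'path\\to\\stanford-parser' on Windows.
os.environ['STANFORD_PARSER'] = parser_dir
os.environ['STANFORD_MODELS'] = parser_dir

print(parser_dir)
```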

See also https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software#stanford-tagger-ner-tokenizer-and-parser and, if you want a TL;DR solution, see https://github.com/alvations/nltk_cli
