Resolve coreference using Stanford CoreNLP - unable to load parser model


Question

I want to do a very simple job: given a string containing pronouns, I want to resolve them.

For example, I want to turn the sentence "Mary has a little lamb. She is cute." into "Mary has a little lamb. Mary is cute.".

I have tried to use Stanford CoreNLP. However, I seem unable to get the parser to start. I have imported all the included jars in my project using Eclipse, and I have allocated 3GB to the JVM (-Xmx3g).

The error is rather baffling:


Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;

I don't understand where that L comes from; I think it is the root of my problem... This is rather weird. I have looked inside the source files, but there is no wrong reference there.

The code:

import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.dcoref.CorefCoreAnnotations.CorefGraphAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.dcoref.CorefChain;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.util.IntTuple;
import edu.stanford.nlp.util.Pair;
import edu.stanford.nlp.util.Timing;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import java.util.Properties;

public class Coref {

/**
 * @param args the command line arguments
 */
public static void main(String[] args) throws IOException, ClassNotFoundException {
    // creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution 
    Properties props = new Properties();
    props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    // read some text in the text variable
    String text = "Mary has a little lamb. She is very cute."; // Add your text here!

    // create an empty Annotation just with the given text
    Annotation document = new Annotation(text);

    // run all Annotators on this text
    pipeline.annotate(document);

    // these are all the sentences in this document
    // a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
    List<CoreMap> sentences = document.get(SentencesAnnotation.class);

    for(CoreMap sentence: sentences) {
      // traversing the words in the current sentence
      // a CoreLabel is a CoreMap with additional token-specific methods
      for (CoreLabel token: sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);       
      }

      // this is the parse tree of the current sentence
      Tree tree = sentence.get(TreeAnnotation.class);
      System.out.println(tree);

      // this is the Stanford dependency graph of the current sentence
      SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
    }

    // This is the coreference link graph
    // Each chain stores a set of mentions that link to each other,
    // along with a method for getting the most representative mention
    // Both sentence and token offsets start at 1!
    Map<Integer, CorefChain> graph = 
      document.get(CorefChainAnnotation.class);
    System.out.println(graph);
  }
}

Full stack trace:



Adding annotator tokenize
Adding annotator ssplit
Adding annotator pos
Loading POS Model [edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger] ...
Loading default properties from trained tagger edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [2.1 sec].
done [2.2 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [4.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.distsim.crf.ser.gz ... done [3.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.distsim.crf.ser.gz ... done [3.3 sec].
Adding annotator parse
Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(Ljava/lang/String;[Ljava/lang/String;)Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;
    at edu.stanford.nlp.pipeline.ParserAnnotator.loadModel(ParserAnnotator.java:115)
    at edu.stanford.nlp.pipeline.ParserAnnotator.<init>(ParserAnnotator.java:64)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:603)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP$12.create(StanfordCoreNLP.java:585)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:62)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:329)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:196)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:186)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:178)
    at Coref.main(Coref.java:41)


Answer

Yes, the L is just a bizarre Sun convention dating back to Java 1.0: in the JVM's internal method descriptors, "L&lt;class name&gt;;" denotes an object type.
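For anyone puzzled by the notation, the error message is printed in JVM descriptor syntax. A tiny decoder (my own illustration, not part of CoreNLP or of the original answer) shows the mapping:

```java
// Illustration only: decode JVM type descriptors like those in the error.
// "L<binary-name>;" is an object type, "[" prefixes an array, and single
// letters are primitives (I=int, Z=boolean, J=long, V=void, ...).
public class DescriptorDemo {
    static String decode(String d) {
        switch (d.charAt(0)) {
            case 'L': return d.substring(1, d.length() - 1).replace('/', '.');
            case '[': return decode(d.substring(1)) + "[]";
            case 'I': return "int";
            case 'Z': return "boolean";
            case 'J': return "long";
            case 'V': return "void";
            default:  return d; // remaining primitives omitted for brevity
        }
    }

    public static void main(String[] args) {
        // The two parameter types and the return type from the error message:
        System.out.println(decode("Ljava/lang/String;"));
        System.out.println(decode("[Ljava/lang/String;"));
        System.out.println(decode("Ledu/stanford/nlp/parser/lexparser/LexicalizedParser;"));
    }
}
```

So the descriptor in the error just says the JVM was looking for a loadModel(String, String[]) method returning a LexicalizedParser.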

LexicalizedParser.loadModel(String, String...) is a new method added to the parser, and it is not being found. I suspect this means that you have another, older version of the parser on your classpath which is being used instead.
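One quick way to check for a stale jar (a diagnostic sketch of my own, not from the original answer) is to ask the classloader where it finds the class; if two CoreNLP versions are on the classpath, the URL printed tells you which jar wins:

```java
import java.net.URL;

// Sketch: report which classpath entry a class would be loaded from, to
// spot an older parser jar shadowing the newer one. The default class name
// is the real CoreNLP parser class; the helper itself is just for diagnosis.
public class WhichJar {
    static URL locate(String className) {
        // Returns null if the class is not visible on the classpath at all.
        return WhichJar.class.getClassLoader()
                .getResource(className.replace('.', '/') + ".class");
    }

    public static void main(String[] args) {
        String name = args.length > 0 ? args[0]
                : "edu.stanford.nlp.parser.lexparser.LexicalizedParser";
        System.out.println(name + " -> " + locate(name));
    }
}
```

If the printed jar is not the stanford-corenlp distribution you intended, remove or reorder the offending classpath entry.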

Try this: in a shell outside of any IDE, run these commands (giving the path to stanford-corenlp appropriately, and changing : to ; if on Windows):

javac -cp ".:stanford-corenlp-2012-04-09/*" Coref.java
java -mx3g -cp ".:stanford-corenlp-2012-04-09/*" Coref

The parser loads and your code runs correctly for me - you just need to add some print statements so you can see what it has done :-).
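For example, such print statements could walk the coreference chains and show each pronoun next to its representative mention - essentially the substitution asked for. This is a sketch assuming the dcoref API of that CoreNLP release (CorefChain, its nested CorefMention, getRepresentativeMention(), and getMentionsInTextualOrder() are real dcoref names, but details may differ across versions); it belongs after pipeline.annotate(document) in the code above:

```java
// Fragment for Coref.main, after pipeline.annotate(document):
Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
for (CorefChain chain : graph.values()) {
    // The representative mention is the "best" name for the entity ("Mary").
    CorefChain.CorefMention rep = chain.getRepresentativeMention();
    for (CorefChain.CorefMention m : chain.getMentionsInTextualOrder()) {
        if (m != rep) {
            // Prints e.g.: "She" (sentence 2) -> "Mary"
            System.out.println("\"" + m.mentionSpan + "\" (sentence "
                    + m.sentNum + ") -> \"" + rep.mentionSpan + "\"");
        }
    }
}
```

Rebuilding the resolved text then amounts to replacing each non-representative mention span with rep.mentionSpan, using the mention's sentence and token offsets (which start at 1, as the comment in the question's code notes).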

