Stanford CoreNLP - Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

Problem Description

I am trying to run the simple program available on this website: https://stanfordnlp.github.io/CoreNLP/api.html

My program:

import java.io.BufferedReader;  
import java.io.BufferedWriter;  
import java.io.FileNotFoundException;  
import java.io.FileReader;  
import java.io.FileWriter;  
import java.io.IOException;  
import java.io.PrintWriter;  
import java.util.List;  
import java.util.Properties;  

import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;  
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;  
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;  
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;  
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;  
import edu.stanford.nlp.ling.CoreLabel;  
import edu.stanford.nlp.pipeline.Annotation;  
import edu.stanford.nlp.pipeline.StanfordCoreNLP;  
import edu.stanford.nlp.util.CoreMap;  

public class StanfordClass {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse");

        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        String text = "What is the Weather in Mumbai right now?";
        Annotation document = new Annotation(text);
        pipeline.annotate(document);

        List<CoreMap> sentences = document.get(SentencesAnnotation.class);

        for (CoreMap sentence : sentences) {
            // traversing the words in the current sentence
            // a CoreLabel is a CoreMap with additional token-specific methods
            for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
                // this is the text of the token
                String word = token.get(TextAnnotation.class);
                // this is the POS tag of the token
                String pos = token.get(PartOfSpeechAnnotation.class);
                // this is the NER label of the token
                String ne = token.get(NamedEntityTagAnnotation.class);

                System.out.println(String.format("Print: word: [%s] pos: [%s] ne: [%s]", word, pos, ne));
            }
        }
    }
}

But I am getting: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

What I tried
1. If I remove the ner (named entity recognizer) annotator from the code above, i.e. props.setProperty("annotators", "tokenize, ssplit, pos, lemma, parse");, the code runs fine.
2. But I need ner (named entity recognition), so I increased the heap size in the eclipse.ini file up to 1g, which should be far more than enough for this program, so I am sure the heap size is not the problem here. I think something is missing, but I can't work out what. (See the heap check sketched below.)
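Note that changing eclipse.ini only resizes the heap of the Eclipse IDE itself; a program launched from Eclipse runs in its own JVM, whose limit comes from the run configuration or JRE VM arguments. A minimal sketch (not part of the original program) that prints the heap limit the running JVM actually received:

public class HeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() reports the -Xmx limit this JVM was started with, in bytes
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap for this JVM: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}

If this prints far less than the 1g you configured, the setting is not reaching the program, which is exactly what the recommended answer below fixes.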

Recommended Answer

After a lot of searching, I found the answer here: Using Stanford CoreNLP

Use the following steps (a command-line equivalent is sketched after the list):
1. Windows -> Preferences
2. Java -> Installed JREs
3. Select the JRE and click Edit
4. In the "Default VM arguments" field, type "-Xmx1024M" (or your own preference; for a 1 GB heap the value is 1024).
5. Click Finish or OK.
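If the program is run from a plain command line rather than from Eclipse, the same heap limit is passed directly with -Xmx. A hedged example; the jar names are placeholders for whichever CoreNLP distribution and models jar are actually on your classpath, and on Windows the classpath separator is ; instead of ::

java -Xmx1024m -cp ".:stanford-corenlp.jar:stanford-corenlp-models.jar" StanfordClass

If 1024M still produces the OutOfMemoryError, a larger value such as -Xmx2g is usually needed when the ner and parse models are loaded together.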
