Fuseki GC overhead limit exceeded during data import


Problem description

I'm trying to import LinkedMDB (6.1M triples) into my local instance of jena-fuseki at startup:

/path/to/fuseki-server --file=/path/to/linkedmdb.nt /ds

That runs for about a minute, then dies with the following error:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at com.hp.hpl.jena.graph.Node$3.construct(Node.java:318)
    at com.hp.hpl.jena.graph.Node.create(Node.java:344)
    at com.hp.hpl.jena.graph.NodeFactory.createURI(NodeFactory.java:48)
    at org.apache.jena.riot.system.RiotLib.createIRIorBNode(RiotLib.java:80)
    at org.apache.jena.riot.system.ParserProfileBase.createURI(ParserProfileBase.java:107)
    at org.apache.jena.riot.system.ParserProfileBase.create(ParserProfileBase.java:156)
    at org.apache.jena.riot.lang.LangNTriples.tokenAsNode(LangNTriples.java:97)
    at org.apache.jena.riot.lang.LangNTriples.parseOne(LangNTriples.java:90)
    at org.apache.jena.riot.lang.LangNTriples.runParser(LangNTriples.java:54)
    at org.apache.jena.riot.lang.LangBase.parse(LangBase.java:42)
    at org.apache.jena.riot.RDFParserRegistry$ReaderRIOTFactoryImpl$1.read(RDFParserRegistry.java:142)
    at org.apache.jena.riot.RDFDataMgr.process(RDFDataMgr.java:818)
    at org.apache.jena.riot.RDFDataMgr.parse(RDFDataMgr.java:679)
    at org.apache.jena.riot.RDFDataMgr.read(RDFDataMgr.java:211)
    at org.apache.jena.riot.RDFDataMgr.read(RDFDataMgr.java:104)
    at org.apache.jena.fuseki.FusekiCmd.processModulesAndArgs(FusekiCmd.java:251)
    at arq.cmdline.CmdArgModule.process(CmdArgModule.java:51)
    at arq.cmdline.CmdMain.mainMethod(CmdMain.java:100)
    at arq.cmdline.CmdMain.mainRun(CmdMain.java:63)
    at arq.cmdline.CmdMain.mainRun(CmdMain.java:50)
    at org.apache.jena.fuseki.FusekiCmd.main(FusekiCmd.java:141)

Is there a way to raise the memory limit, or to import the data in a less intensive way?

For comparison's sake, a 1-million-triple source file imports in under 10 seconds.

Answer

Increase the heap memory: java -Xmx2048M -jar fuseki-sys.jar ...

Open the fuseki-server script in an editor; you'll find the line JVM_ARGS=${JVM_ARGS:--Xmx1200M}. Change it to JVM_ARGS=${JVM_ARGS:--Xmx2048M}.
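Because the script uses the shell's ${VAR:-default} expansion, you don't strictly have to edit the file: setting JVM_ARGS in the environment overrides the built-in -Xmx1200M default. A minimal sketch of how that expansion behaves (the heap_flag function below is illustrative, mirroring the line in the script):

```shell
#!/bin/sh
# The fuseki-server script sets the heap via a shell default:
#   JVM_ARGS=${JVM_ARGS:--Xmx1200M}
# meaning "use $JVM_ARGS if it is set and non-empty, else -Xmx1200M".

heap_flag() {
    # Mirrors the expansion used inside the fuseki-server script.
    echo "${JVM_ARGS:--Xmx1200M}"
}

unset JVM_ARGS
echo "default:  $(heap_flag)"    # the built-in -Xmx1200M applies

JVM_ARGS=-Xmx2048M
export JVM_ARGS
echo "override: $(heap_flag)"    # the exported value wins
```

So an equivalent one-shot invocation, with no script edit, would be:

JVM_ARGS=-Xmx2048M /path/to/fuseki-server --file=/path/to/linkedmdb.nt /ds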
