Running Stanford corenlp server with custom models

Question

I've trained a POS tagger and neural dependency parser with Stanford CoreNLP. I can get them to work via the command line, and now I'd like to access them through a server.

However, the documentation for the server doesn't say anything about using custom models. I checked the code and didn't find any obvious way of supplying a configuration file.

Any idea how to do this? I don't need all annotators, just the ones I trained.

Answer

Yes, the server should (in theory) support all the functionality of the regular pipeline. The properties GET parameter is translated into the Properties object you would normally pass into StanfordCoreNLP. Therefore, if you'd like the server to load a custom model, you can just call it with, e.g.:

# POST the text to the running server; the 'properties' query parameter
# carries the same key/value pairs you would put in a Properties object.
# The model path must exist on the machine where the server runs.
wget \
  --post-data 'the quick brown fox jumped over the lazy dog' \
  'localhost:9000/?properties={"parse.model": "/path/to/model/on/server/computer", "annotators": "tokenize,ssplit,pos", "outputFormat": "json"}' -O -
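
If you're calling the server from a script rather than the shell, the same request can be made from Python. This is a minimal sketch, not part of the original answer: it assumes the requests library is installed, the server is already running on localhost:9000, and the model path is a placeholder just as in the wget example.

import json
import requests

# Same custom-model properties as the wget example above; the path must
# be valid on the machine where the server runs (placeholder here).
props = {
    "parse.model": "/path/to/model/on/server/computer",
    "annotators": "tokenize,ssplit,pos",
    "outputFormat": "json",
}

# POST the raw text; the 'properties' query parameter carries the JSON
# that the server turns into a Properties object.
resp = requests.post(
    "http://localhost:9000/",
    params={"properties": json.dumps(props)},
    data="the quick brown fox jumped over the lazy dog".encode("utf-8"),
)
resp.raise_for_status()
print(resp.json())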

Note that the server won't garbage-collect this model afterwards though, so if you load too many models there's a good chance you'll run into out-of-memory errors...
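
One way to stay within memory limits, given that behavior: send the exact same properties string with every request, so the server only ever loads one custom pipeline rather than one per property variation. A sketch of that pattern (assuming, as the note above implies, that the server caches pipelines per distinct properties payload; host and model path are placeholders):

import json
import requests

# Build the properties JSON once and reuse it verbatim, so repeated
# requests hit the server's cached pipeline instead of loading new models.
PROPS = json.dumps({
    "parse.model": "/path/to/model/on/server/computer",
    "annotators": "tokenize,ssplit,pos",
    "outputFormat": "json",
})

def annotate(text):
    # Every call sends the identical properties string.
    resp = requests.post(
        "http://localhost:9000/",
        params={"properties": PROPS},
        data=text.encode("utf-8"),
    )
    resp.raise_for_status()
    return resp.json()

print(annotate("the quick brown fox jumped over the lazy dog"))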
