Example for Deploying a TensorFlow Model via a RESTful API
Question
Is there any example code for deploying a TensorFlow model via a RESTful API? I see examples for a command-line program and for a mobile app. Is there a framework for this, or do people just load the model and expose the predict method via a web framework (like Flask) to take input (say, via JSON) and return the response? By framework I mean scaling for a large number of predict requests. Of course, since the models are immutable, we can launch multiple instances of our prediction server and put them behind a load balancer (like HAProxy). My question is: are people using some framework for this, doing it from scratch, or is this perhaps already available in TensorFlow and I have not noticed it?
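The load-balancing idea mentioned above can be sketched as a minimal HAProxy configuration. Everything here is illustrative: the backend names and the two Flask instances on ports 5000 and 5001 are made-up assumptions, not part of any referenced setup.

```
# haproxy.cfg (sketch) -- server names and ports are hypothetical
frontend predict_in
    bind *:80
    default_backend predict_servers

backend predict_servers
    balance roundrobin
    server flask1 127.0.0.1:5000 check
    server flask2 127.0.0.1:5001 check
```

Because a loaded model is read-only, each instance can serve requests independently and round-robin balancing is sufficient; no session affinity is needed.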
Answer
https://github.com/sugyan/tensorflow-mnist shows a simple REST API example using Flask and a pre-trained model loaded via restore.
from flask import jsonify, request
import numpy as np

@app.route('/api/mnist', methods=['POST'])
def mnist():
    # Invert and normalize the 28x28 pixel values to [0, 1], flatten to 1x784
    data = ((255 - np.array(request.json, dtype=np.uint8)) / 255.0).reshape(1, 784)
    output1 = simple(data)          # prediction from the simple regression model
    output2 = convolutional(data)   # prediction from the convolutional model
    return jsonify(results=[output1, output2])
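The preprocessing line in that handler is easy to misread, so here is a standalone sketch of what it does to a white-on-black MNIST-style input. The 28x28 array below is made up for illustration; only the transform itself comes from the handler above.

```python
import numpy as np

# A fake 28x28 grayscale image as a drawing canvas might send it:
# 255 = white background, 0 = black ink.
pixels = [[255] * 28 for _ in range(28)]
pixels[10][10] = 0  # one "inked" pixel

# Same transform as the handler: invert so ink becomes 1.0,
# scale to [0, 1], and flatten to the 1x784 shape the model expects.
x = ((255 - np.array(pixels, dtype=np.uint8)) / 255.0).reshape(1, 784)

print(x.shape)           # (1, 784)
print(x.max(), x.min())  # 1.0 0.0
```

The inversion matters because MNIST models are trained on images where the digit is bright on a dark background, while a browser canvas typically produces the opposite.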
Also, see the online demo at https://tensorflow-mnist.herokuapp.com/. The API seems fast enough.
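For completeness, a client call might look like the sketch below. The payload format, a 28x28 nested list of grayscale values, is inferred from the handler above; the URL and port are hypothetical.

```python
import json

# Build the JSON body the /api/mnist handler expects: a 28x28 nested
# list of grayscale pixel values (255 = background, 0 = ink).
image = [[255] * 28 for _ in range(28)]
body = json.dumps(image)

# The request itself would be a plain POST, e.g. with urllib
# (not executed here, since it needs a running server):
#   req = urllib.request.Request(
#       "http://localhost:5000/api/mnist", data=body.encode(),
#       headers={"Content-Type": "application/json"})
#   resp = json.load(urllib.request.urlopen(req))

decoded = json.loads(body)
print(len(decoded), len(decoded[0]))  # 28 28
```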