How to deploy a tensorflow model to azure ml workbench


Question

I am using Azure ML Workbench to perform binary classification. So far everything works fine: I have good accuracy, and I would like to deploy the model as a web service for inference.

I don't really know where to start: Azure provides this doc, but the example uses sklearn and pickle, not tensorflow.

I'm not even sure whether I should save and restore the model with tf.train.Saver() or with tf.saved_model_builder().

If anyone has a good example that uses vanilla tensorflow in Azure ML Workbench, that would be great.

Answer

OK, so for anyone wondering the same, I found the answer. Instead of using a pickle model, I saved my model as a protobuf by following this (a minimal freezing sketch is included after the script below). Then I wrote the init(), run() and load_graph() methods like so:

import json
import os

import tensorflow as tf


def init():
    global persistent_session, model, x, y, keep_prob, inputs_dc, prediction_dc
    # Load the frozen graph and connect the inputs / outputs.
    # The tensor names below are specific to my graph; adapt them to yours.
    model = load_graph(os.path.join(os.environ['AZUREML_NATIVE_SHARE_DIRECTORY'], 'frozen_model.pb'))
    x = model.get_tensor_by_name('prefix/Placeholder:0')
    y = model.get_tensor_by_name('prefix/convNet/sample_prediction:0')
    keep_prob = model.get_tensor_by_name('prefix/Placeholder_3:0')
    persistent_session = tf.Session(graph=model)

# load the graph from the protobuf file
def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Import under the "prefix" scope so the tensor names used in init() match
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="prefix")
    return graph

# run the inference
def run(input_array):
    global inputs_dc, prediction_dc
    try:
        prediction = persistent_session.run(y, feed_dict={x: input_array, keep_prob: 1.0})
        print("prediction : ", prediction)
        # inputs_dc / prediction_dc are Azure ML model data collectors,
        # assumed to be set up elsewhere (their creation is not shown here)
        inputs_dc.collect(input_array)
        prediction_dc.collect(prediction.tolist())
        return json.dumps(prediction.tolist())
    except Exception as e:
        return str(e)
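
For a quick local smoke test before deploying, the script can be run directly. This is only a sketch: the input shape is a hypothetical placeholder and must match whatever prefix/Placeholder:0 actually expects, and the data collectors are stubbed out since they only exist in the deployed service:

import numpy as np

if __name__ == '__main__':
    # Stub the Azure ML data collectors so run() doesn't fail locally
    class NoOpCollector:
        def collect(self, data):
            pass
    inputs_dc = NoOpCollector()
    prediction_dc = NoOpCollector()

    init()
    dummy = np.zeros((1, 784), dtype=np.float32)  # hypothetical input shape, adapt to your model
    print(run(dummy))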

Probably needs some cleaning, but it works!
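
For reference, the frozen_model.pb consumed above comes from freezing a trained TF1 graph into a single protobuf. Here is a minimal sketch of that step, assuming a checkpoint saved at model.ckpt and an output node named convNet/sample_prediction (both names are assumptions, adapt them to your own training run):

import tensorflow as tf

CHECKPOINT = 'model.ckpt'                    # assumed checkpoint path
OUTPUT_NODE = 'convNet/sample_prediction'    # assumed output node name

with tf.Session() as sess:
    # Rebuild the graph from the checkpoint's meta file and restore the weights
    saver = tf.train.import_meta_graph(CHECKPOINT + '.meta')
    saver.restore(sess, CHECKPOINT)
    # Bake the trained variables into constants so the graph is self-contained
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, [OUTPUT_NODE])
    with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())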
