How to make the tensorflow hub embeddings servable using tensorflow serving?

Problem Description

I am trying to use an embeddings module from tensorflow hub as a servable. I am new to tensorflow. Currently, I am using the Universal Sentence Encoder embeddings as a lookup to convert sentences into embeddings and then using those embeddings to find their similarity to another sentence.

My current code to convert sentences into embeddings is:

with tf.Session() as session:
  session.run([tf.global_variables_initializer(), tf.tables_initializer()])
  # self.embed is the hub.Module wrapping the Universal Sentence Encoder
  sen_embeddings = session.run(self.embed(prepared_text))

prepared_text is a list of sentences. How do I take this model and make it a servable?
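
One common way to do the similarity lookup mentioned above is cosine similarity on the returned vectors; a minimal sketch with numpy (the function and variable names here are illustrative, not part of the original code):

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: the dot product of the two embedding vectors
    # divided by the product of their norms.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# sen_embeddings is the array returned by session.run above;
# compare the first sentence against the second:
score = cosine_similarity(sen_embeddings[0], sen_embeddings[1])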

Recommended Answer

Right now you probably need to do this by hand. Here is my solution, similar to the previous answer but more general: it shows how to use any other module without having to guess the input parameters, and it is extended with verification and usage:

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.saved_model import simple_save

export_dir = "/tmp/tfserving/universal_encoder/00000001"
with tf.Session(graph=tf.Graph()) as sess:
    module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2") 
    input_params = module.get_input_info_dict()
    # inspect which tensors the module accepts - 'text' is the name of the input tensor

    text_input = tf.placeholder(name='text', dtype=input_params['text'].dtype, 
        shape=input_params['text'].get_shape())
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])

    embeddings = module(text_input)

    simple_save(sess,
        export_dir,
        inputs={'text': text_input},
        outputs={'embeddings': embeddings},
        legacy_init_op=tf.tables_initializer())

Thanks to module.get_input_info_dict() you know which tensor names you need to pass to the model - you use that name as the key for inputs={} in the simple_save method.
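
As a quick illustration, you can print that dictionary for any module before wiring up the placeholder; a minimal sketch (for this particular module the only key happens to be 'text'):

import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
# Each entry maps an input name to the dtype and shape the module expects,
# which is exactly what the placeholder in the export code is built from.
for name, info in module.get_input_info_dict().items():
    print(name, info.dtype, info.get_shape())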

Remember that in order to serve the model, the directory path needs to end with a version number, which is why '00000001' is the last path component in which saved_model.pb resides.
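
For reference, simple_save writes saved_model.pb plus a variables/ directory, so the export above should end up looking roughly like this:

/tmp/tfserving/universal_encoder/
└── 00000001/
    ├── saved_model.pb
    └── variables/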

After exporting your module, the quickest way to check that the model was exported properly for serving is to use the saved_model_cli API:

saved_model_cli run --dir /tmp/tfserving/universal_encoder/00000001 --tag_set serve --signature_def serving_default --input_exprs 'text=["what this is"]'
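
Alternatively, you can load the export back in Python and run it directly; a minimal sketch using the TF 1.x SavedModel loader, reading the tensor names from the signature instead of hard-coding them:

import tensorflow as tf

export_dir = "/tmp/tfserving/universal_encoder/00000001"
with tf.Session(graph=tf.Graph()) as sess:
    # Load the graph under the 'serve' tag; this restores variables and
    # runs the legacy_init_op that initializes the lookup tables.
    meta_graph = tf.saved_model.loader.load(sess, ["serve"], export_dir)
    signature = meta_graph.signature_def["serving_default"]
    text_input = signature.inputs["text"].name
    embeddings = signature.outputs["embeddings"].name
    result = sess.run(embeddings, feed_dict={text_input: ["what this is"]})
    print(result.shape)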

To serve the model from Docker:

docker pull tensorflow/serving
docker run -p 8501:8501 -v /tmp/tfserving/universal_encoder:/models/universal_encoder -e MODEL_NAME=universal_encoder -t tensorflow/serving
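
Once the container is up, you can query TensorFlow Serving's standard REST endpoint at /v1/models/<model_name>:predict; a minimal sketch using Python's requests library:

import json
import requests

url = "http://localhost:8501/v1/models/universal_encoder:predict"
# For a single-input signature, 'instances' can simply be a list of inputs,
# one sentence per instance.
payload = {"instances": ["what this is", "another sentence"]}
response = requests.post(url, data=json.dumps(payload))
# The response carries one embedding vector per input sentence.
print(response.json()["predictions"])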
