Performing inference with a BERT (TF 1.x) saved model

Question

I'm stuck on one line of code and have been stalled on a project all weekend as a result.

I am working on a project that uses BERT for sentence classification. I have successfully trained the model, and I can test the results using the example code from run_classifier.py.

I can export the model using this example code (which has been reposted repeatedly, so I believe that it's right for this model):

def export(self):
  def serving_input_fn():
    label_ids = tf.placeholder(tf.int32, [None], name='label_ids')
    input_ids = tf.placeholder(tf.int32, [None, self.max_seq_length], name='input_ids')
    input_mask = tf.placeholder(tf.int32, [None, self.max_seq_length], name='input_mask')
    segment_ids = tf.placeholder(tf.int32, [None, self.max_seq_length], name='segment_ids')
    input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn({
        'label_ids': label_ids, 'input_ids': input_ids,
        'input_mask': input_mask, 'segment_ids': segment_ids})()
    return input_fn
  self.estimator._export_to_tpu = False
  self.estimator.export_savedmodel(self.output_dir, serving_input_fn)
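
One detail that is easy to miss: in TF 1.x, export_savedmodel returns the path of the timestamped export directory it creates, so the path can be captured at export time instead of being reconstructed from a timestamp later. A minimal sketch, reusing the names from the snippet above (the bytes-vs-str handling is an assumption about the TF 1.x return type):

export_path = self.estimator.export_savedmodel(self.output_dir, serving_input_fn)
# TF 1.x commonly returns this path as bytes, e.g. b'output/1589418540' (assumption)
if isinstance(export_path, bytes):
    export_path = export_path.decode('utf-8')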

I can also load the exported estimator (where the export function saves the exported model into a subdirectory labeled with a timestamp):

predict_fn = predictor.from_saved_model(self.output_dir + timestamp_number)

However, for the life of me, I cannot figure out what to provide to predict_fn as input for inference. Here is my best code at the moment:

def predict(self):
  input = 'Test input'
  guid = 'predict-0'
  text_a = tokenization.convert_to_unicode(input)
  label = self.label_list[0]
  examples = [InputExample(guid=guid, text_a=text_a, text_b=None, label=label)]
  features = convert_examples_to_features(examples, self.label_list,
    self.max_seq_length, self.tokenizer)
  predict_input_fn = input_fn_builder(features, self.max_seq_length, False)
  predict_fn = predictor.from_saved_model(self.output_dir + timestamp_number)
  result = predict_fn(predict_input_fn)       # this generates an error
  print(result)

It doesn't seem to matter what I provide to predict_fn: the examples array, the features array, the predict_input_fn function. Clearly, predict_fn wants a dictionary of some type - but every single thing that I've tried generates an exception due to a tensor mismatch or other errors that generally mean: bad input.
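
For reference, the loaded predictor exposes the tensors its serving signature expects, which is one way to see exactly which keys and shapes the input dict must have. A small sketch, assuming the same predict_fn object loaded above:

# Inspect the serving signature of the exported model (sketch).
# feed_tensors / fetch_tensors map signature names to the underlying tensors,
# so the keys printed here are what the input dict must provide.
predict_fn = predictor.from_saved_model(self.output_dir + timestamp_number)
print(predict_fn.feed_tensors)   # e.g. {'input_ids': <tf.Tensor ...>, ...}
print(predict_fn.fetch_tensors)  # the outputs the model will return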

I presumed that the from_saved_model function wants the same sort of input as the model test function - apparently, that's not the case.

It seems that lots of people have asked this very question - "how do I use an exported BERT TensorFlow model for inference?" - and have gotten no answers:

Thread #1

Thread #2

Thread #3

Thread #4

Any help? Thanks in advance.

Answer

Thank you for this post. Your serving_input_fn was the piece I was missing! Your predict function needs to be changed to feed the features dict directly, rather than use the predict_input_fn:

# Assumes the BERT repo's run_classifier module, a tokenizer built from the
# fine-tuned model's vocab, and a MAX_SEQ_LEN matching training are in scope.
from tensorflow.contrib import predictor
import run_classifier

def predict(sentences):
    labels = [0, 1]
    input_examples = [
        run_classifier.InputExample(
            guid="",
            text_a=x,
            text_b=None,
            label=0
        ) for x in sentences]  # guid="" and label=0 are dummy values; only text_a matters for inference
    input_features = run_classifier.convert_examples_to_features(
        input_examples, labels, MAX_SEQ_LEN, tokenizer
    )
    # This is where pred_input_fn is replaced: flatten the features into
    # parallel lists keyed by the placeholder names from serving_input_fn.
    all_input_ids = []
    all_input_mask = []
    all_segment_ids = []
    all_label_ids = []

    for feature in input_features:
        all_input_ids.append(feature.input_ids)
        all_input_mask.append(feature.input_mask)
        all_segment_ids.append(feature.segment_ids)
        all_label_ids.append(feature.label_id)
    pred_dict = {
        'input_ids': all_input_ids,
        'input_mask': all_input_mask,
        'segment_ids': all_segment_ids,
        'label_ids': all_label_ids
    }
    predict_fn = predictor.from_saved_model('../testing/1589418540')
    result = predict_fn(pred_dict)
    print(result)

pred_sentences = [
  "That movie was absolutely awful",
  "The acting was a bit lacking",
  "The film was creative and surprising",
  "Absolutely fantastic!",
]
predict(pred_sentences)
{'probabilities': array([[-0.3579178 , -1.2010787 ],
       [-0.36648935, -1.1814401 ],
       [-0.30407643, -1.3386648 ],
       [-0.45970002, -0.9982413 ],
       [-0.36113673, -1.1936386 ],
       [-0.36672896, -1.1808994 ]], dtype=float32), 'labels': array([0, 0, 0, 0, 0, 0])}

However, the probabilities returned for the sentences in pred_sentences do not match the probabilities I get using estimator.predict(predict_input_fn), where estimator is the fine-tuned model used within the same Python session. For example, [-0.27276006, -1.4324446 ] using estimator vs [-0.26713806, -1.4505868 ] using predictor.
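
As a closing note, the values under 'probabilities' above look like log-softmax outputs (as in the common BERT fine-tuning notebooks). Assuming that is what this classifier returns, a short sketch to recover normalized probabilities and predicted labels:

import numpy as np

# Sketch, assuming result['probabilities'] holds log-softmax values as printed above.
log_probs = np.array(result['probabilities'])
probs = np.exp(log_probs)                # each row now sums to ~1.0
pred_labels = np.argmax(probs, axis=-1)  # index of the most likely class
print(probs, pred_labels)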

