How do I use BertForMaskedLM or BertModel to calculate perplexity of a sentence?


Problem description

I want to use BertForMaskedLM or BertModel to calculate the perplexity of a sentence, so I wrote code like this:

import numpy as np
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertForMaskedLM
# Load pre-trained model (weights)
with torch.no_grad():
    model = BertForMaskedLM.from_pretrained('hfl/chinese-bert-wwm-ext')
    model.eval()
    # Load pre-trained model tokenizer (vocabulary)
    tokenizer = BertTokenizer.from_pretrained('hfl/chinese-bert-wwm-ext')
    sentence = "我不会忘记和你一起奋斗的时光。"
    tokenize_input = tokenizer.tokenize(sentence)
    tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
    sen_len = len(tokenize_input)
    sentence_loss = 0.

    for i, word in enumerate(tokenize_input):
        # add mask to i-th character of the sentence
        tokenize_input[i] = '[MASK]'
        mask_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])

        output = model(mask_input)

        prediction_scores = output[0]
        softmax = nn.Softmax(dim=0)
        ps = softmax(prediction_scores[0, i]).log()
        word_loss = ps[tensor_input[0, i]]
        sentence_loss += word_loss.item()

        tokenize_input[i] = word
    ppl = np.exp(-sentence_loss/sen_len)
    print(ppl)

I think this code is right, but I also noticed BertForMaskedLM's masked_lm_labels parameter, so could I use this parameter to calculate the PPL of a sentence more easily? I know that the input_ids argument is the masked input and the masked_lm_labels argument is the desired output, but I can't understand the actual meaning of the loss it outputs. Its code looks like this:

if masked_lm_labels is not None:
    loss_fct = CrossEntropyLoss()  # -100 index = padding token
    masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size),
                              masked_lm_labels.view(-1))
    outputs = (masked_lm_loss,) + outputs

Recommended answer

Yes, you can use the labels parameter (or masked_lm_labels; the parameter name varies across versions of Hugging Face Transformers) to specify the masked token positions, and use -100 to ignore the tokens that you don't want included in the loss computation. For example:

sentence='我爱你'
from transformers import BertTokenizer, BertForMaskedLM
import torch
import numpy as np

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertForMaskedLM.from_pretrained('bert-base-chinese')

tensor_input = tokenizer.encode(sentence, return_tensors='pt')
# tensor([[ 101, 2769, 4263,  872,  102]])

repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1)
# tensor([[ 101, 2769, 4263,  872,  102],
#         [ 101, 2769, 4263,  872,  102],
#         [ 101, 2769, 4263,  872,  102]])

mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
# tensor([[0., 1., 0., 0., 0.],
#         [0., 0., 1., 0., 0.],
#         [0., 0., 0., 1., 0.]])

masked_input = repeat_input.masked_fill(mask == 1, 103)
# tensor([[ 101,  103, 4263,  872,  102],
#         [ 101, 2769,  103,  872,  102],
#         [ 101, 2769, 4263,  103,  102]])

labels = repeat_input.masked_fill(masked_input != 103, -100)
# tensor([[-100, 2769, -100, -100, -100],
#         [-100, -100, 4263, -100, -100],
#         [-100, -100, -100,  872, -100]])

loss, _ = model(masked_input, masked_lm_labels=labels)

score = np.exp(loss.item())
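
To make the returned loss concrete: CrossEntropyLoss uses -100 as its default ignore_index, so the loss is the negative log-likelihood averaged only over the masked positions, and exp(loss) is therefore the pseudo-perplexity of the sentence. A minimal sketch of that equivalence (the shapes and token ids below are only for illustration):

import torch
import torch.nn as nn

vocab_size = 21128                       # bert-base-chinese vocabulary size
logits = torch.randn(3, 5, vocab_size)   # stands in for prediction_scores: 3 masked copies, 5 tokens each
labels = torch.full((3, 5), -100, dtype=torch.long)
labels[0, 1], labels[1, 2], labels[2, 3] = 2769, 4263, 872   # only the masked positions carry real labels

# what BertForMaskedLM computes internally
loss = nn.CrossEntropyLoss()(logits.view(-1, vocab_size), labels.view(-1))

# the same number by hand: mean negative log-likelihood over the three masked positions only
log_probs = logits.log_softmax(dim=-1)
manual = -(log_probs[0, 1, 2769] + log_probs[1, 2, 4263] + log_probs[2, 3, 872]) / 3
print(loss.item(), manual.item())  # the two values agree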

As a function:

def score(model, tokenizer, sentence, mask_token_id=103):
    tensor_input = tokenizer.encode(sentence, return_tensors='pt')
    repeat_input = tensor_input.repeat(tensor_input.size(-1) - 2, 1)
    mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
    masked_input = repeat_input.masked_fill(mask == 1, mask_token_id)
    labels = repeat_input.masked_fill(masked_input != mask_token_id, -100)
    loss, _ = model(masked_input, masked_lm_labels=labels)
    return np.exp(loss.item())

score(model, tokenizer, '我爱你') # returns 45.63794545581973
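
Note that in more recent Hugging Face Transformers releases the argument is called labels (masked_lm_labels was deprecated and later removed) and the model returns an output object rather than a plain tuple by default. A sketch of the same idea against that newer API (pseudo_ppl is just an illustrative name, and tokenizer.mask_token_id replaces the hard-coded 103):

import numpy as np
import torch
from transformers import BertTokenizer, BertForMaskedLM

def pseudo_ppl(model, tokenizer, sentence):
    tensor_input = tokenizer.encode(sentence, return_tensors='pt')
    # one copy of the sentence per non-special token
    repeat_input = tensor_input.repeat(tensor_input.size(-1) - 2, 1)
    # shifted diagonal: mask exactly one token in each copy, skipping [CLS] and [SEP]
    mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
    masked_input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id)
    labels = repeat_input.masked_fill(masked_input != tokenizer.mask_token_id, -100)
    with torch.no_grad():
        loss = model(input_ids=masked_input, labels=labels).loss
    return np.exp(loss.item())

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertForMaskedLM.from_pretrained('bert-base-chinese').eval()
print(pseudo_ppl(model, tokenizer, '我爱你'))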
