How to stop BERT from breaking apart specific words into word-pieces


Question

I am using a pre-trained BERT model to tokenize a text into meaningful tokens. However, the text has many specific words and I don't want the BERT model to break them into word-pieces. Is there any solution to this? For example:

from transformers import BertTokenizer

tokenizer = BertTokenizer('bert-base-uncased-vocab.txt')  # load tokenizer from a local vocab file
tokens = tokenizer.tokenize("metastasis")

This creates tokens like this:

['meta', '##sta', '##sis']

However, I want to keep the whole word as a single token, like this:

['metastasis']

Answer

You are free to add new tokens to the existing pretrained tokenizer, but then you need to train your model with the improved tokenizer (extra tokens).

Example:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
v = tokenizer.get_vocab()
print(len(v))                                   # vocabulary size before adding tokens
tokenizer.add_tokens(['whatever', 'underdog'])  # register new tokens with the tokenizer
v = tokenizer.get_vocab()
print(len(v))                                   # vocabulary size after adding tokens

If a token already exists, like 'whatever', it will not be added, which is why the vocabulary size grows by only one in the output below.

Output:

30522
30523
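
As a minimal sketch (not part of the original answer) of the "train your model with the improved tokenizer" step, assuming the Hugging Face transformers API and using the asker's word metastasis as the added token: after extending the tokenizer, resize the model's embedding matrix so the new entries get embeddings, then fine-tune as usual.

from transformers import BertModel, BertTokenizer

# Hypothetical list of domain terms to keep whole; 'metastasis' is the asker's example.
domain_terms = ['metastasis']

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
num_added = tokenizer.add_tokens(domain_terms)  # returns how many tokens were actually new
print(num_added)                                # 1

model = BertModel.from_pretrained('bert-base-uncased')
model.resize_token_embeddings(len(tokenizer))   # grow the embedding matrix to the new vocab size

print(tokenizer.tokenize("metastasis"))         # ['metastasis'] instead of ['meta', '##sta', '##sis']

The rows added by resize_token_embeddings are randomly initialized, so the model still needs fine-tuning on data containing these terms before the new embeddings become meaningful.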
