Saving and reloading a Hugging Face fine-tuned transformer

Question

I am trying to reload a fine-tuned DistilBertForTokenClassification model. I am using transformers 3.4.0 and pytorch version 1.6.0+cu101. After training the downloaded model with the Trainer, I saved it with trainer.save_model(), and while troubleshooting I also saved the model to a different directory via model.save_pretrained(). I am using Google Colab and saving the model to my Google Drive. After training I evaluated the model on my test set and got great results; however, when I return to the notebook (or factory-reset the Colab runtime) and try to reload the model, the predictions are terrible. Checking the directory, I can see the config.json file and the pytorch_model.bin file there. Here is the full code.

from transformers import DistilBertForTokenClassification

# load the pretrained model from huggingface
#model = DistilBertForTokenClassification.from_pretrained('distilbert-base-cased', num_labels=len(uniq_labels))
model = DistilBertForTokenClassification.from_pretrained('distilbert-base-uncased', num_labels=len(uniq_labels)) 

model.to('cuda');

from transformers import Trainer, TrainingArguments

# model_dir is used in the paths below, so it must be defined first
model_dir = '/content/drive/My Drive/Colab Notebooks/models/'

training_args = TrainingArguments(
    output_dir = model_dir +  'mitmovie_pt_distilbert_uncased/results',          # output directory
    #overwrite_output_dir = True,
    evaluation_strategy='epoch',
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir = model_dir +  'mitmovie_pt_distilbert_uncased/logs',            # directory for storing logs
    logging_steps=10,
    load_best_model_at_end = True
)

trainer = Trainer(
    model = model,                         # the instantiated 🤗 Transformers model to be trained
    args = training_args,                  # training arguments, defined above
    train_dataset = train_dataset,         # training dataset
    eval_dataset = test_dataset             # evaluation dataset
)

trainer.train()

trainer.evaluate()

trainer.save_model(model_dir + 'mitmovie_pt_distilbert_uncased/model')

# alternative saving method and folder
model.save_pretrained(model_dir + 'distilbert_testing')
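
As an aside (my addition, not part of the original question): reloading cleanly later usually also needs the tokenizer saved alongside the model. A minimal sketch, assuming the same distilbert-base-uncased tokenizer was used for training:

from transformers import DistilBertTokenizerFast

# recreate the tokenizer used during training and save it next to the
# model weights so both can be restored with from_pretrained() later
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
tokenizer.save_pretrained(model_dir + 'mitmovie_pt_distilbert_uncased/model')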

Returning to the notebook after a restart...

from transformers import DistilBertForTokenClassification, DistilBertConfig, AutoModelForTokenClassification

# after a runtime restart, model_dir must be defined again
model_dir = '/content/drive/My Drive/Colab Notebooks/models/'

# retrieve the saved model
model = DistilBertForTokenClassification.from_pretrained(model_dir + 'mitmovie_pt_distilbert_uncased/model',
                                                         local_files_only=True)

model.to('cuda')

Model predictions from both directories are terrible now. The model does run and outputs the number of classes I expect, so it looks as though the actual trained weights were either never saved or are somehow not being loaded.
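
One way to check whether the fine-tuned weights actually made it to disk (my addition, not in the original question) is to compare a saved encoder tensor against a freshly downloaded pretrained copy; if they match exactly, the fine-tuning was never saved. A minimal sketch:

import torch
from transformers import DistilBertForTokenClassification

# load the saved model and a fresh pretrained baseline with the same head size
saved = DistilBertForTokenClassification.from_pretrained(model_dir + 'mitmovie_pt_distilbert_uncased/model')
fresh = DistilBertForTokenClassification.from_pretrained('distilbert-base-uncased', num_labels=saved.config.num_labels)

# compare the first encoder parameter; True here means the saved weights
# are identical to the pretrained baseline, i.e. the fine-tuning was lost
name, p = next(iter(saved.state_dict().items()))
print(name, torch.allclose(p, fresh.state_dict()[name]))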

Answer

Have you tried loading the model that the Trainer saved in the folder:

mitmovie_pt_distilbert_uncased/results

The Hugging Face Trainer saves the model directly into the defined output_dir.
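
A minimal sketch of that (the checkpoint folder name checkpoint-500 is hypothetical; check the results directory for the actual checkpoint-<step> subfolder the Trainer wrote):

from transformers import DistilBertForTokenClassification

# load from a checkpoint inside the Trainer's output_dir rather than the
# separately saved folders; checkpoint-500 is a placeholder name
model = DistilBertForTokenClassification.from_pretrained(
    model_dir + 'mitmovie_pt_distilbert_uncased/results/checkpoint-500',
    local_files_only=True)
model.to('cuda')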
