PyTorch: Learning rate scheduler

Problem Description

How do I use a learning rate scheduler with the following optimizer?

optimizer = torch.optim.Adam(optim_params, betas=(args.momentum, args.beta), weight_decay=args.weight_decay)

I have written the following scheduler:

scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.9)

I am not sure whether I should step the scheduler or the optimizer first. In which order should I perform the following?

optimizer.zero_grad()
scheduler.step()
optimizer.step()

Solution

Since 1.3 the behaviour has changed; see the release notes, and this issue in particular.

Before this version, you were supposed to step the scheduler before the optimizer, which IMO wasn't reasonable. There was some back and forth (the change actually breaks backward compatibility, and IMO breaking it for such a minor inconvenience wasn't a good idea), but currently you should step the scheduler after the optimizer.

optimizer.zero_grad()
optimizer.step()
scheduler.step()
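
For context, here is a minimal sketch of where these calls sit in a full training loop. The model, loss, learning rate, and random batches below are placeholders and not part of the original question; only the call ordering follows the answer. Note that StepLR counts calls to scheduler.step(), so with step_size=100 the learning rate is multiplied by gamma every 100 such calls (every 100 epochs if you step once per epoch).

import torch

# Placeholder model, loss, and data -- only the zero_grad / step ordering mirrors the answer.
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.9)

for epoch in range(300):
    inputs = torch.randn(32, 10)    # stand-in batch
    targets = torch.randn(32, 1)

    optimizer.zero_grad()           # clear old gradients
    loss = criterion(model(inputs), targets)
    loss.backward()                 # compute gradients
    optimizer.step()                # update the parameters first...
    scheduler.step()                # ...then advance the learning-rate schedule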
