Using learning rate schedule and learning rate warmup with TensorFlow2
Problem Description
I am using Python 3 and TensorFlow 2, with a training dataset of 50,000 examples and a batch size of 64, so one epoch is 50000 / 64 ≈ 781 training iterations. How can I use both learning rate warmup and learning rate decay in my code?

Currently, I am using learning rate decay:
from tensorflow import keras

boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate_fn = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries, values)

print("\nCurrent step value: {0}, LR: {1:.6f}\n".format(
    optimizer.iterations.numpy(),
    optimizer.learning_rate(optimizer.iterations)))
However, I don't know how to use learning rate warmup together with this learning rate decay. Any help?
Recommended Answer
How about using the implementation from the transformers library?
from typing import Callable

import tensorflow as tf


class WarmUp(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(
        self,
        initial_learning_rate: float,
        decay_schedule_fn: Callable,
        warmup_steps: int,
        power: float = 1.0,
        name: str = None,
    ):
        super().__init__()
        self.initial_learning_rate = initial_learning_rate
        self.warmup_steps = warmup_steps
        self.power = power
        self.decay_schedule_fn = decay_schedule_fn
        self.name = name

    def __call__(self, step):
        with tf.name_scope(self.name or "WarmUp") as name:
            # Implements polynomial warmup: if global_step < warmup_steps, the
            # learning rate will be `global_step / warmup_steps * init_lr`.
            global_step_float = tf.cast(step, tf.float32)
            warmup_steps_float = tf.cast(self.warmup_steps, tf.float32)
            warmup_percent_done = global_step_float / warmup_steps_float
            warmup_learning_rate = self.initial_learning_rate * tf.math.pow(
                warmup_percent_done, self.power)
            return tf.cond(
                global_step_float < warmup_steps_float,
                lambda: warmup_learning_rate,
                lambda: self.decay_schedule_fn(step - self.warmup_steps),
                name=name,
            )

    def get_config(self):
        return {
            "initial_learning_rate": self.initial_learning_rate,
            "decay_schedule_fn": self.decay_schedule_fn,
            "warmup_steps": self.warmup_steps,
            "power": self.power,
            "name": self.name,
        }
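With this class in hand, a minimal sketch of how it could be combined with the asker's PiecewiseConstantDecay schedule follows. The warmup length of 781 steps (one epoch at 50,000 examples / batch size 64) and the SGD optimizer are illustrative assumptions, not fixed values:

import tensorflow as tf
from tensorflow import keras

warmup_steps = 781  # ~one epoch: 50000 examples / batch size 64 (assumed)

# Decay schedule applied after warmup. Note that WarmUp calls it with
# `step - warmup_steps`, so these boundaries are counted from the end
# of the warmup phase.
decay_fn = keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[100000, 110000], values=[1.0, 0.5, 0.1])

lr_schedule = WarmUp(
    initial_learning_rate=1.0,  # peak LR, reached at the end of warmup
    decay_schedule_fn=decay_fn,
    warmup_steps=warmup_steps,
)

optimizer = keras.optimizers.SGD(learning_rate=lr_schedule)

# Probe the schedule at a few steps to verify the warmup/decay handoff.
for step in [0, 400, 781, 50000, 120000]:
    print("step {:>6d}  lr {:.6f}".format(step, float(lr_schedule(step))))

With the default power=1.0, the learning rate ramps linearly from 0 up to initial_learning_rate over the first warmup_steps steps, then hands off to the decay schedule. Because WarmUp subclasses LearningRateSchedule, it can be passed directly to any Keras optimizer as the learning_rate argument.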