espnet2.schedulers.exponential_decay_warmup.ExponentialDecayWarmup
class espnet2.schedulers.exponential_decay_warmup.ExponentialDecayWarmup(optimizer: Optimizer, max_lr: float, min_lr: float, total_steps: int, warmup_steps: int = 0, warm_from_zero: bool = False, last_epoch: int = -1)
Bases: _LRScheduler, AbsBatchStepScheduler
Exponential Decay with Warmup.
if step < warmup_steps:
    if warm_from_zero:
        lr = initial_lr * (step / warmup_steps)
    else:
        lr = initial_lr
else:
    decay_factor = (step - warmup_steps) / (total_steps - warmup_steps)
    lr = initial_lr * exp(decay_factor * log(final_lr / initial_lr))
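The schedule above can be written as a small standalone function (a sketch, not the ESPnet implementation itself; `initial_lr` and `final_lr` correspond to the `max_lr` and `min_lr` constructor arguments):

```python
import math

def exponential_decay_warmup_lr(
    step: int,
    max_lr: float,
    min_lr: float,
    total_steps: int,
    warmup_steps: int = 0,
    warm_from_zero: bool = False,
) -> float:
    """Compute the learning rate at a given step (standalone sketch)."""
    if step < warmup_steps:
        # During warmup: either ramp linearly from 0, or hold at max_lr.
        return max_lr * (step / warmup_steps) if warm_from_zero else max_lr
    # After warmup: exponential interpolation from max_lr down to min_lr.
    decay_factor = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * math.exp(decay_factor * math.log(min_lr / max_lr))
```

At `step == warmup_steps` the decay factor is 0, so the rate starts at `max_lr`; at `step == total_steps` the factor is 1 and the rate has decayed to `min_lr`.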
- Parameters:
- optimizer (Optimizer) – Wrapped optimizer.
- max_lr (float) – Initial learning rate (before decay).
- min_lr (float) – Final learning rate (after decay).
- total_steps (int) – Total number of steps (epochs * iters per epoch).
- warmup_steps (int) – Number of warmup steps. Default: 0.
- warm_from_zero (bool) – If True, warmup starts from 0 to initial_lr.
- last_epoch (int) – The index of last step. Default: -1.
get_lr()
Compute the next learning rate for each of the optimizer’s param_groups.
- Returns: A list of learning rates for each of the optimizer's param_groups, with the same types as their current group["lr"]s.
- Return type: list[float | Tensor]
####### NOTE If you’re trying to inspect the most recent learning rate, use get_last_lr() instead.
####### NOTE The returned Tensors are copies, and never alias the optimizer’s group["lr"]s.
init_lr()
step(epoch: int = None)
Step the scheduler.
- Parameters: epoch (int, optional) –
Deprecated since version 1.4: If provided, sets last_epoch to epoch and uses _get_closed_form_lr() if it is available. This is not universally supported. Use step() without arguments instead.
####### NOTE Call this method after calling the optimizer’s step().
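The note above prescribes calling the scheduler after the optimizer. A minimal training loop illustrating that ordering (to keep this sketch importable without espnet2, a `torch.optim.lr_scheduler.LambdaLR` with the same schedule shape stands in for `ExponentialDecayWarmup`; the hyperparameter values are made up for illustration):

```python
import math
import torch

max_lr, min_lr = 1e-3, 1e-5
total_steps, warmup_steps = 100, 10

def lr_scale(step: int) -> float:
    # Same shape as ExponentialDecayWarmup with warm_from_zero=True,
    # expressed as a multiplicative factor on the base lr (max_lr here).
    if step < warmup_steps:
        return step / warmup_steps
    decay_factor = (step - warmup_steps) / (total_steps - warmup_steps)
    return math.exp(decay_factor * math.log(min_lr / max_lr))

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=max_lr)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_scale)

for step in range(total_steps):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    optimizer.step()   # update parameters first
    scheduler.step()   # then advance the learning-rate schedule
```

After the loop, `optimizer.param_groups[0]["lr"]` has decayed to `min_lr`.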
