espnet3.components.modeling.optimization_spec.OptimizerSpec
class espnet3.components.modeling.optimization_spec.OptimizerSpec(name: str, optimizer: Any, params: str, accum_grad_steps: int = 1, step_every_n_iters: int = 1, gradient_clip_val: float | None = None, gradient_clip_algorithm: str = 'norm')
Bases: object
Describe one named optimizer block from config.optimizers.
This dataclass is the normalized, validated form of one user-facing config entry such as:
```yaml
optimizers:
  generator:
    optimizer:
      _target_: torch.optim.Adam
      lr: 0.0002
    params: generator
    accum_grad_steps: 2
    step_every_n_iters: 1
    gradient_clip_val: 1.0
    gradient_clip_algorithm: norm
```
It does not store the instantiated optimizer object itself. Instead, it records the policy needed by the Lightning module to:
- select which parameters belong to this optimizer,
- decide when accumulated gradients are large enough to step,
- decide how often this optimizer should update,
- and apply per-optimizer gradient clipping.
accum_grad_steps : int = 1
classmethod from_config(name: str, cfg) → OptimizerSpec
Build and validate an optimizer spec from one named config block.
This converts raw Hydra/OmegaConf config into a typed OptimizerSpec instance, applies dataclass defaults for omitted optional fields, and runs basic validation before the Lightning module instantiates the real optimizer objects.
gradient_clip_algorithm : str = 'norm'
gradient_clip_val : float | None = None
name : str
optimizer : Any
params : str
step_every_n_iters : int = 1
validate() → None
Validate update-policy settings after normalization.
This checks only spec-local rules such as positive accumulation / step intervals and valid clipping algorithm names. Parameter coverage and overlap across different optimizers are validated later by the Lightning module because those checks require seeing all optimizer specs together.
