espnet2.asr.state_spaces.components.LinearActivation
espnet2.asr.state_spaces.components.LinearActivation(d_input, d_output, bias=True, zero_bias_init=False, transposed=False, initializer=None, activation=None, activate=False, weight_norm=False, **kwargs)
Create a linear module with optional activation and initialization.
This function constructs a linear layer with the specified input and output dimensions, applies optional weight initialization, and appends the activation function when one is given and activate=True. The layer can be created in transposed form, and weight normalization can be applied as well. The sketch below illustrates the construction order.
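A minimal sketch of that construction order, using plain PyTorch building blocks (nn.Linear, nn.init.zeros_, nn.utils.weight_norm, nn.Sequential) rather than ESPnet's internal helpers; the activation lookup table here is illustrative and covers only a subset of the supported names:

```python
import torch.nn as nn

def linear_activation_sketch(d_input, d_output, bias=True,
                             zero_bias_init=False, activation=None,
                             activate=False, weight_norm=False):
    # Core linear layer (the real function swaps in a transposed
    # variant when transposed=True).
    linear = nn.Linear(d_input, d_output, bias=bias)
    if bias and zero_bias_init:
        nn.init.zeros_(linear.bias)  # optional zero bias initialization
    if weight_norm:
        linear = nn.utils.weight_norm(linear)  # optional weight normalization
    if activate and activation is not None:
        # The activation is attached only when activate=True.
        acts = {"relu": nn.ReLU, "tanh": nn.Tanh,
                "sigmoid": nn.Sigmoid, "gelu": nn.GELU}
        return nn.Sequential(linear, acts[activation]())
    return linear
```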
- Parameters:
- d_input (int) – The number of input features.
- d_output (int) – The number of output features.
- bias (bool, optional) – If True, adds a learnable bias to the output. Default is True.
- zero_bias_init (bool, optional) – If True, initializes the bias to zero. Default is False.
- transposed (bool, optional) – If True, creates a transposed linear layer instead of a standard one. Default is False.
- initializer (str, optional) – The type of weight initializer to use. Options include “uniform”, “normal”, “xavier”, “zero”, “one”.
- activation (str, optional) – The activation function to apply after the linear transformation. Options include “relu”, “tanh”, “sigmoid”, “gelu”, “swish”, “silu”, “glu”, “sqrelu”, or “ln”.
- activate (bool, optional) – If True, applies the activation function as part of this module. Default is False.
- weight_norm (bool, optional) – If True, applies weight normalization to the linear layer. Default is False.
- **kwargs – Additional arguments passed to the linear layer constructor.
- Returns: The constructed linear module. When activate is True and an activation is specified, the result is a sequential module containing the linear layer followed by the activation; otherwise the linear layer is returned on its own.
- Return type: nn.Module
Examples
>>> import torch
>>> linear_layer = LinearActivation(d_input=128, d_output=64,
...                                 activation='relu', activate=True, bias=True)
>>> output = linear_layer(torch.randn(10, 128))
>>> transposed_layer = LinearActivation(d_input=64, d_output=128,
... transposed=True, weight_norm=True)
>>> output = transposed_layer(torch.randn(10, 64, 1))
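The returned module composes like any other nn.Module. As a further illustration (a minimal sketch, assuming only the standard PyTorch containers), two layers can be stacked directly:
>>> import torch.nn as nn
>>> mlp = nn.Sequential(
...     LinearActivation(128, 256, activation='gelu', activate=True),
...     LinearActivation(256, 10),
... )
>>> mlp(torch.randn(32, 128)).shape
torch.Size([32, 10])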
NOTE
- The function raises a NotImplementedError if the specified activation function is not implemented.
- Ensure that the input tensor shape matches the dimensions the layer expects; the illustration below contrasts the standard and transposed conventions.
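A hedged illustration of those shape conventions, inferred from the examples above (features in the last dimension for the standard layer; channels before length for the transposed one):
>>> std = LinearActivation(d_input=16, d_output=32)
>>> std(torch.randn(4, 16)).shape      # (batch, d_input) -> (batch, d_output)
torch.Size([4, 32])
>>> tr = LinearActivation(d_input=16, d_output=32, transposed=True)
>>> tr(torch.randn(4, 16, 100)).shape  # (batch, d_input, length) -> (batch, d_output, length)
torch.Size([4, 32, 100])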