espnet2.spk.encoder.conformer_encoder.MfaConformerEncoder
class espnet2.spk.encoder.conformer_encoder.MfaConformerEncoder(input_size: int, output_size: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, input_layer: str | None = 'conv2d2', normalize_before: bool = True, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 3, macaron_style: bool = False, rel_pos_type: str = 'legacy', pos_enc_layer_type: str = 'rel_pos', selfattention_layer_type: str = 'rel_selfattn', activation_type: str = 'swish', use_cnn_module: bool = True, zero_triu: bool = False, cnn_module_kernel: int = 31, stochastic_depth_rate: float | List[float] = 0.0, layer_drop_rate: float = 0.0, max_pos_emb_len: int = 5000, padding_idx: int | None = None)
Bases: AbsEncoder
Conformer encoder module for MFA-Conformer.
Paper: Y. Zhang et al., "MFA-Conformer: Multi-scale feature aggregation conformer for automatic speaker verification," in Proc. INTERSPEECH, 2022.
- Parameters:
- input_size (int) – Input dimension.
- output_size (int) – Dimension of attention.
- attention_heads (int) – The number of heads of multi-head attention.
- linear_units (int) – The number of units of position-wise feed forward.
- num_blocks (int) – The number of encoder blocks.
- dropout_rate (float) – Dropout rate.
- attention_dropout_rate (float) – Dropout rate in attention.
- positional_dropout_rate (float) – Dropout rate after adding positional encoding.
- input_layer (Union[str, torch.nn.Module]) – Input layer type.
- normalize_before (bool) – Whether to use layer_norm before the first block.
- positionwise_layer_type (str) – "linear", "conv1d", or "conv1d-linear".
- positionwise_conv_kernel_size (int) – Kernel size of positionwise conv1d layer.
- rel_pos_type (str) – Whether to use the latest relative positional encoding or the legacy one. The legacy relative positional encoding will be deprecated in the future. More details can be found in https://github.com/espnet/espnet/pull/2816.
- pos_enc_layer_type (str) – Encoder positional encoding layer type.
- selfattention_layer_type (str) – Encoder attention layer type.
- activation_type (str) – Encoder activation function type.
- macaron_style (bool) – Whether to use macaron style for positionwise layer.
- use_cnn_module (bool) – Whether to use convolution module.
- zero_triu (bool) – Whether to zero the upper triangular part of the attention matrix.
- cnn_module_kernel (int) – Kernel size of convolution module.
- stochastic_depth_rate (Union[float, List[float]]) – Stochastic depth rate, either shared or per encoder block.
- layer_drop_rate (float) – Layer drop rate.
- max_pos_emb_len (int) – Maximum length supported by the positional encoding.
- padding_idx (int) – Padding idx for input_layer=embed.
Initialize internal Module state, shared by both nn.Module and ScriptModule.
forward(x: Tensor) → Tuple[Tensor, Tensor, Tensor | None]
Calculate forward propagation.
- Parameters: x (torch.Tensor) – Input tensor (#batch, L, input_size).
- Returns: Output tensor (#batch, L, output_size).
- Return type: torch.Tensor
output_size() → int
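The "multi-scale feature aggregation" of the cited paper concatenates the hidden states of every conformer block along the feature dimension before pooling, so the aggregated width grows with the number of blocks. A minimal NumPy sketch of that aggregation idea, with toy shapes standing in for block outputs (this is a conceptual illustration, not the espnet2 implementation):

```python
import numpy as np

num_blocks, batch, length, d = 6, 2, 50, 256
# Stand-ins for the hidden states produced by each conformer block.
block_outputs = [np.random.randn(batch, length, d) for _ in range(num_blocks)]
# MFA: concatenate all block outputs along the feature dimension.
aggregated = np.concatenate(block_outputs, axis=-1)
print(aggregated.shape)  # (2, 50, 1536)
```

With 6 blocks of width 256, the aggregated feature dimension is 6 × 256 = 1536.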
