espnet2.tts.fastspeech.fastspeech.FastSpeech
class espnet2.tts.fastspeech.fastspeech.FastSpeech(idim: int, odim: int, adim: int = 384, aheads: int = 4, elayers: int = 6, eunits: int = 1536, dlayers: int = 6, dunits: int = 1536, postnet_layers: int = 5, postnet_chans: int = 512, postnet_filts: int = 5, postnet_dropout_rate: float = 0.5, positionwise_layer_type: str = 'conv1d', positionwise_conv_kernel_size: int = 1, use_scaled_pos_enc: bool = True, use_batch_norm: bool = True, encoder_normalize_before: bool = True, decoder_normalize_before: bool = True, encoder_concat_after: bool = False, decoder_concat_after: bool = False, duration_predictor_layers: int = 2, duration_predictor_chans: int = 384, duration_predictor_kernel_size: int = 3, duration_predictor_dropout_rate: float = 0.1, reduction_factor: int = 1, encoder_type: str = 'transformer', decoder_type: str = 'transformer', transformer_enc_dropout_rate: float = 0.1, transformer_enc_positional_dropout_rate: float = 0.1, transformer_enc_attn_dropout_rate: float = 0.1, transformer_dec_dropout_rate: float = 0.1, transformer_dec_positional_dropout_rate: float = 0.1, transformer_dec_attn_dropout_rate: float = 0.1, conformer_rel_pos_type: str = 'legacy', conformer_pos_enc_layer_type: str = 'rel_pos', conformer_self_attn_layer_type: str = 'rel_selfattn', conformer_activation_type: str = 'swish', use_macaron_style_in_conformer: bool = True, use_cnn_in_conformer: bool = True, conformer_enc_kernel_size: int = 7, conformer_dec_kernel_size: int = 31, zero_triu: bool = False, spks: int | None = None, langs: int | None = None, spk_embed_dim: int | None = None, spk_embed_integration_type: str = 'add', use_gst: bool = False, gst_tokens: int = 10, gst_heads: int = 4, gst_conv_layers: int = 6, gst_conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), gst_conv_kernel_size: int = 3, gst_conv_stride: int = 2, gst_gru_layers: int = 1, gst_gru_units: int = 128, init_type: str = 'xavier_uniform', init_enc_alpha: float = 1.0, init_dec_alpha: float = 1.0, use_masking: bool = False, use_weighted_masking: bool = False)
Bases: AbsTTS
FastSpeech module for end-to-end text-to-speech.
This is a module of FastSpeech, a feed-forward Transformer with a duration predictor described in FastSpeech: Fast, Robust and Controllable Text to Speech. Because it requires no autoregressive processing during inference, decoding is fast compared with autoregressive Transformers.
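The core of this design is length regulation: each encoder state is repeated according to its duration (groundtruth during training, predicted at inference) so that all output frames can be decoded in parallel. The following is a minimal illustrative sketch of that idea, not ESPnet's internal implementation:

>>> import torch
>>> hs = torch.randn(5, 384)              # (T_text, adim) encoder outputs
>>> ds = torch.tensor([1, 3, 2, 1, 2])    # frames assigned to each input token
>>> expanded = torch.repeat_interleave(hs, ds, dim=0)
>>> expanded.shape                        # (ds.sum(), adim)
torch.Size([9, 384])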
idim
Dimension of the inputs.
- Type: int
odim
Dimension of the outputs.
- Type: int
eos
End-of-sequence token ID.
- Type: int
reduction_factor
Reduction factor for output features.
- Type: int
encoder_type
Type of encoder used (“transformer” or “conformer”).
- Type: str
decoder_type
Type of decoder used (“transformer” or “conformer”).
- Type: str
use_scaled_pos_enc
Flag to use scaled positional encoding.
- Type: bool
use_gst
Flag to indicate the use of global style token.
- Type: bool
padding_idx
Index used for padding in input sequences.
- Type: int
Parameters:
- idim (int) – Dimension of the inputs.
- odim (int) – Dimension of the outputs.
- adim (int, optional) – Attention dimension (default: 384).
- aheads (int, optional) – Number of attention heads (default: 4).
- elayers (int, optional) – Number of encoder layers (default: 6).
- eunits (int, optional) – Number of encoder hidden units (default: 1536).
- dlayers (int, optional) – Number of decoder layers (default: 6).
- dunits (int, optional) – Number of decoder hidden units (default: 1536).
- postnet_layers (int, optional) – Number of postnet layers (default: 5).
- postnet_chans (int, optional) – Number of postnet channels (default: 512).
- postnet_filts (int, optional) – Kernel size of postnet (default: 5).
- postnet_dropout_rate (float, optional) – Dropout rate in postnet (default: 0.5).
- positionwise_layer_type (str, optional) – Type of positionwise layer (default: “conv1d”).
- positionwise_conv_kernel_size (int, optional) – Kernel size for positionwise convolution (default: 1).
- use_scaled_pos_enc (bool, optional) – Use trainable scaled positional encoding (default: True).
- use_batch_norm (bool, optional) – Use batch normalization in encoder prenet (default: True).
- encoder_normalize_before (bool, optional) – Apply layernorm before encoder block (default: True).
- decoder_normalize_before (bool, optional) – Apply layernorm before decoder block (default: True).
- encoder_concat_after (bool, optional) – Concatenate attention layer’s input and output in encoder (default: False).
- decoder_concat_after (bool, optional) – Concatenate attention layer’s input and output in decoder (default: False).
- duration_predictor_layers (int, optional) – Number of duration predictor layers (default: 2).
- duration_predictor_chans (int, optional) – Number of duration predictor channels (default: 384).
- duration_predictor_kernel_size (int, optional) – Kernel size of duration predictor (default: 3).
- duration_predictor_dropout_rate (float, optional) – Dropout rate in duration predictor (default: 0.1).
- reduction_factor (int, optional) – Reduction factor (default: 1).
- encoder_type (str, optional) – Encoder type (“transformer” or “conformer”, default: “transformer”).
- decoder_type (str, optional) – Decoder type (“transformer” or “conformer”, default: “transformer”).
- transformer_enc_dropout_rate (float, optional) – Dropout rate in encoder (default: 0.1).
- transformer_enc_positional_dropout_rate (float, optional) – Dropout rate after encoder positional encoding (default: 0.1).
- transformer_enc_attn_dropout_rate (float, optional) – Dropout rate in encoder self-attention (default: 0.1).
- transformer_dec_dropout_rate (float, optional) – Dropout rate in decoder (default: 0.1).
- transformer_dec_positional_dropout_rate (float, optional) – Dropout rate after decoder positional encoding (default: 0.1).
- transformer_dec_attn_dropout_rate (float, optional) – Dropout rate in decoder self-attention (default: 0.1).
- conformer_rel_pos_type (str, optional) – Relative pos encoding type in conformer (default: “legacy”).
- conformer_pos_enc_layer_type (str, optional) – Pos encoding layer type in conformer (default: “rel_pos”).
- conformer_self_attn_layer_type (str, optional) – Self-attention layer type in conformer (default: “rel_selfattn”).
- conformer_activation_type (str, optional) – Activation function type in conformer (default: “swish”).
- use_macaron_style_in_conformer (bool, optional) – Use macaron style FFN in conformer (default: True).
- use_cnn_in_conformer (bool, optional) – Use CNN in conformer (default: True).
- conformer_enc_kernel_size (int, optional) – Kernel size of encoder conformer (default: 7).
- conformer_dec_kernel_size (int, optional) – Kernel size of decoder conformer (default: 31).
- zero_triu (bool, optional) – Use zero triu in relative self-attention (default: False).
- spks (Optional[int], optional) – Number of speakers (default: None).
- langs (Optional[int], optional) – Number of languages (default: None).
- spk_embed_dim (Optional[int], optional) – Speaker embedding dimension (default: None).
- spk_embed_integration_type (str, optional) – How to integrate speaker embedding (default: “add”).
- use_gst (bool, optional) – Whether to use global style token (default: False).
- gst_tokens (int, optional) – Number of GST embeddings (default: 10).
- gst_heads (int, optional) – Number of heads in GST multihead attention (default: 4).
- gst_conv_layers (int, optional) – Number of conv layers in GST (default: 6).
- gst_conv_chans_list (Sequence[int], optional) – Number of channels in conv layers in GST (default: (32, 32, 64, 64, 128, 128)).
- gst_conv_kernel_size (int, optional) – Kernel size of conv layers in GST (default: 3).
- gst_conv_stride (int, optional) – Stride size of conv layers in GST (default: 2).
- gst_gru_layers (int, optional) – Number of GRU layers in GST (default: 1).
- gst_gru_units (int, optional) – Number of GRU units in GST (default: 128).
- init_type (str, optional) – How to initialize transformer parameters (default: “xavier_uniform”).
- init_enc_alpha (float, optional) – Initial value of alpha in scaled pos encoding of encoder (default: 1.0).
- init_dec_alpha (float, optional) – Initial value of alpha in scaled pos encoding of decoder (default: 1.0).
- use_masking (bool, optional) – Whether to apply masking for padded parts in loss calculation (default: False).
- use_weighted_masking (bool, optional) – Whether to apply weighted masking in loss calculation (default: False).
####### Examples
>>> model = FastSpeech(idim=256, odim=80)
>>> text = torch.randint(1, 256, (1, 10))            # (B, T_text)
>>> text_lengths = torch.tensor([10])
>>> durations = torch.ones(1, 11, dtype=torch.long)  # (B, T_text + 1)
>>> feats = torch.randn(1, 11, 80)                   # (B, T_feats, odim); T_feats = durations.sum()
>>> lengths = torch.tensor([11])                     # feats_lengths and durations_lengths
>>> loss, stats, weight = model.forward(text, text_lengths, feats, lengths,
...                                     durations, lengths)
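The encoder and decoder blocks can also be switched to conformer layers through the constructor arguments documented above. A hedged configuration sketch (the kernel sizes shown are the documented defaults):

>>> model = FastSpeech(idim=256, odim=80,
...                    encoder_type="conformer", decoder_type="conformer",
...                    conformer_enc_kernel_size=7, conformer_dec_kernel_size=31)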
Initialize FastSpeech module.
- Parameters:
- idim (int) – Dimension of the inputs.
- odim (int) – Dimension of the outputs.
- elayers (int) – Number of encoder layers.
- eunits (int) – Number of encoder hidden units.
- dlayers (int) – Number of decoder layers.
- dunits (int) – Number of decoder hidden units.
- postnet_layers (int) – Number of postnet layers.
- postnet_chans (int) – Number of postnet channels.
- postnet_filts (int) – Kernel size of postnet.
- postnet_dropout_rate (float) – Dropout rate in postnet.
- use_scaled_pos_enc (bool) – Whether to use trainable scaled pos encoding.
- use_batch_norm (bool) – Whether to use batch normalization in encoder prenet.
- encoder_normalize_before (bool) – Whether to apply layernorm layer before encoder block.
- decoder_normalize_before (bool) – Whether to apply layernorm layer before decoder block.
- encoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in encoder.
- decoder_concat_after (bool) – Whether to concatenate attention layer’s input and output in decoder.
- duration_predictor_layers (int) – Number of duration predictor layers.
- duration_predictor_chans (int) – Number of duration predictor channels.
- duration_predictor_kernel_size (int) – Kernel size of duration predictor.
- duration_predictor_dropout_rate (float) – Dropout rate in duration predictor.
- reduction_factor (int) – Reduction factor.
- encoder_type (str) – Encoder type (“transformer” or “conformer”).
- decoder_type (str) – Decoder type (“transformer” or “conformer”).
- transformer_enc_dropout_rate (float) – Dropout rate in encoder except attention and positional encoding.
- transformer_enc_positional_dropout_rate (float) – Dropout rate after encoder positional encoding.
- transformer_enc_attn_dropout_rate (float) – Dropout rate in encoder self-attention module.
- transformer_dec_dropout_rate (float) – Dropout rate in decoder except attention & positional encoding.
- transformer_dec_positional_dropout_rate (float) – Dropout rate after decoder positional encoding.
- transformer_dec_attn_dropout_rate (float) – Dropout rate in decoder self-attention module.
- conformer_rel_pos_type (str) – Relative pos encoding type in conformer.
- conformer_pos_enc_layer_type (str) – Pos encoding layer type in conformer.
- conformer_self_attn_layer_type (str) – Self-attention layer type in conformer.
- conformer_activation_type (str) – Activation function type in conformer.
- use_macaron_style_in_conformer (bool) – Whether to use macaron style FFN.
- use_cnn_in_conformer (bool) – Whether to use CNN in conformer.
- conformer_enc_kernel_size (int) – Kernel size of encoder conformer.
- conformer_dec_kernel_size (int) – Kernel size of decoder conformer.
- zero_triu (bool) – Whether to use zero triu in relative self-attention module.
- spks (Optional[int]) – Number of speakers. If set to > 1, assume that the sids will be provided as the input and use sid embedding layer.
- langs (Optional[int]) – Number of languages. If set to > 1, assume that the lids will be provided as the input and use lid embedding layer.
- spk_embed_dim (Optional[int]) – Speaker embedding dimension. If set to > 0, assume that spembs will be provided as the input.
- spk_embed_integration_type (str) – How to integrate speaker embedding.
- use_gst (bool) – Whether to use global style token.
- gst_tokens (int) – The number of GST embeddings.
- gst_heads (int) – The number of heads in GST multihead attention.
- gst_conv_layers (int) – The number of conv layers in GST.
- gst_conv_chans_list (Sequence[int]) – List of the number of channels of conv layers in GST.
- gst_conv_kernel_size (int) – Kernel size of conv layers in GST.
- gst_conv_stride (int) – Stride size of conv layers in GST.
- gst_gru_layers (int) – The number of GRU layers in GST.
- gst_gru_units (int) – The number of GRU units in GST.
- init_type (str) – How to initialize transformer parameters.
- init_enc_alpha (float) – Initial value of alpha in scaled pos encoding of the encoder.
- init_dec_alpha (float) – Initial value of alpha in scaled pos encoding of the decoder.
- use_masking (bool) – Whether to apply masking for padded part in loss calculation.
- use_weighted_masking (bool) – Whether to apply weighted masking in loss calculation.
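As a hedged illustration of the multi-speaker options above (the sizes here are arbitrary, not recommended values): setting spks enables the speaker ID embedding, and setting spk_embed_dim makes the model expect pre-computed speaker embeddings as spembs.

>>> model = FastSpeech(idim=256, odim=80,
...                    spks=10,            # expects sids of shape (B, 1)
...                    spk_embed_dim=192,  # expects spembs of shape (B, 192)
...                    spk_embed_integration_type="add")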
forward(text: Tensor, text_lengths: Tensor, feats: Tensor, feats_lengths: Tensor, durations: Tensor, durations_lengths: Tensor, spembs: Tensor | None = None, sids: Tensor | None = None, lids: Tensor | None = None, joint_training: bool = False) → Tuple[Tensor, Dict[str, Tensor], Tensor]
Calculate forward propagation.
This method performs the forward pass for the FastSpeech model, taking in the input text, target features, and durations. It computes the loss and outputs the statistics for monitoring during training.
- Parameters:
- text (LongTensor) – Batch of padded character ids (B, T_text).
- text_lengths (LongTensor) – Batch of lengths of each input (B,).
- feats (Tensor) – Batch of padded target features (B, T_feats, odim).
- feats_lengths (LongTensor) – Batch of the lengths of each target (B,).
- durations (LongTensor) – Batch of padded durations (B, T_text + 1).
- durations_lengths (LongTensor) – Batch of duration lengths (B,).
- spembs (Optional[Tensor]) – Batch of speaker embeddings (B, spk_embed_dim).
- sids (Optional[Tensor]) – Batch of speaker IDs (B, 1).
- lids (Optional[Tensor]) – Batch of language IDs (B, 1).
- joint_training (bool) – Whether to perform joint training with vocoder.
- Returns: A tuple containing:
  - Tensor: Loss scalar value.
  - Dict: Statistics to be monitored.
  - Tensor: Weight value if not joint training, else model outputs.
- Return type: Tuple[Tensor, Dict[str, Tensor], Tensor]
####### Examples
>>> text = torch.randint(1, 100, (2, 5))              # (B, T_text)
>>> text_lengths = torch.tensor([5, 3])               # (B,)
>>> durations = torch.ones(2, 6, dtype=torch.long)    # (B, T_text + 1)
>>> durations_lengths = torch.tensor([6, 4])          # (B,)
>>> feats = torch.randn(2, 6, 80)                     # (B, T_feats, odim)
>>> feats_lengths = torch.tensor([6, 4])              # (B,); per-sample duration sums
>>> loss, stats, weight = model.forward(text, text_lengths, feats,
...                                     feats_lengths, durations,
...                                     durations_lengths)
inference(text: Tensor, feats: Tensor | None = None, durations: Tensor | None = None, spembs: Tensor | None = None, sids: Tensor | None = None, lids: Tensor | None = None, alpha: float = 1.0, use_teacher_forcing: bool = False) → Dict[str, Tensor]
Generate the sequence of features given the sequences of characters.
- Parameters:
- text (LongTensor) – Input sequence of characters (T_text,).
- feats (Optional[Tensor]) – Feature sequence to extract style (N, idim).
- durations (Optional[LongTensor]) – Groundtruth of duration (T_text + 1,).
- spembs (Optional[Tensor]) – Speaker embedding (spk_embed_dim,).
- sids (Optional[Tensor]) – Speaker ID (1,).
- lids (Optional[Tensor]) – Language ID (1,).
- alpha (float) – Alpha to control the speed.
- use_teacher_forcing (bool) – Whether to use teacher forcing. If true, the groundtruth durations will be used.
- Returns: Output dict including the following items:
  - feat_gen (Tensor): Output sequence of features (T_feats, odim).
  - duration (Tensor): Duration sequence (T_text + 1,).
- Return type: Dict[str, Tensor]
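####### Examples

A hedged usage sketch based on the signature above; alpha controls the speaking speed by scaling the predicted durations:

>>> text = torch.randint(1, 256, (10,))    # (T_text,)
>>> output_dict = model.inference(text, alpha=1.0)
>>> output_dict["feat_gen"].shape          # (T_feats, odim)
>>> output_dict["duration"].shape          # (T_text + 1,)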