espnet2.gan_tts.jets.jets.JETS
class espnet2.gan_tts.jets.jets.JETS(idim: int, odim: int, sampling_rate: int = 22050, generator_type: str = 'jets_generator', generator_params: Dict[str, Any] = {'adim': 256, 'aheads': 2, 'conformer_activation_type': 'swish', 'conformer_dec_kernel_size': 31, 'conformer_enc_kernel_size': 7, 'conformer_pos_enc_layer_type': 'rel_pos', 'conformer_rel_pos_type': 'latest', 'conformer_self_attn_layer_type': 'rel_selfattn', 'decoder_concat_after': False, 'decoder_normalize_before': True, 'decoder_type': 'transformer', 'dlayers': 4, 'dunits': 1024, 'duration_predictor_chans': 384, 'duration_predictor_dropout_rate': 0.1, 'duration_predictor_kernel_size': 3, 'duration_predictor_layers': 2, 'elayers': 4, 'encoder_concat_after': False, 'encoder_normalize_before': True, 'encoder_type': 'transformer', 'energy_embed_dropout': 0.5, 'energy_embed_kernel_size': 1, 'energy_predictor_chans': 384, 'energy_predictor_dropout': 0.5, 'energy_predictor_kernel_size': 3, 'energy_predictor_layers': 2, 'eunits': 1024, 'generator_bias': True, 'generator_channels': 512, 'generator_global_channels': -1, 'generator_kernel_size': 7, 'generator_nonlinear_activation': 'LeakyReLU', 'generator_nonlinear_activation_params': {'negative_slope': 0.1}, 'generator_out_channels': 1, 'generator_resblock_dilations': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'generator_resblock_kernel_sizes': [3, 7, 11], 'generator_upsample_kernel_sizes': [16, 16, 4, 4], 'generator_upsample_scales': [8, 8, 2, 2], 'generator_use_additional_convs': True, 'generator_use_weight_norm': True, 'gst_conv_chans_list': [32, 32, 64, 64, 128, 128], 'gst_conv_kernel_size': 3, 'gst_conv_layers': 6, 'gst_conv_stride': 2, 'gst_gru_layers': 1, 'gst_gru_units': 128, 'gst_heads': 4, 'gst_tokens': 10, 'init_dec_alpha': 1.0, 'init_enc_alpha': 1.0, 'init_type': 'xavier_uniform', 'langs': -1, 'pitch_embed_dropout': 0.5, 'pitch_embed_kernel_size': 1, 'pitch_predictor_chans': 384, 'pitch_predictor_dropout': 0.5, 'pitch_predictor_kernel_size': 5, 
'pitch_predictor_layers': 5, 'positionwise_conv_kernel_size': 1, 'positionwise_layer_type': 'conv1d', 'reduction_factor': 1, 'segment_size': 64, 'spk_embed_dim': None, 'spk_embed_integration_type': 'add', 'spks': -1, 'stop_gradient_from_energy_predictor': False, 'stop_gradient_from_pitch_predictor': True, 'transformer_dec_attn_dropout_rate': 0.1, 'transformer_dec_dropout_rate': 0.1, 'transformer_dec_positional_dropout_rate': 0.1, 'transformer_enc_attn_dropout_rate': 0.1, 'transformer_enc_dropout_rate': 0.1, 'transformer_enc_positional_dropout_rate': 0.1, 'use_batch_norm': True, 'use_cnn_in_conformer': True, 'use_gst': False, 'use_macaron_style_in_conformer': True, 'use_masking': False, 'use_scaled_pos_enc': True, 'use_weighted_masking': False, 'zero_triu': False}, discriminator_type: str = 'hifigan_multi_scale_multi_period_discriminator', discriminator_params: Dict[str, Any] = {'follow_official_norm': False, 'period_discriminator_params': {'bias': True, 'channels': 32, 'downsample_scales': [3, 3, 3, 3, 1], 'in_channels': 1, 'kernel_sizes': [5, 3], 'max_downsample_channels': 1024, 'nonlinear_activation': 'LeakyReLU', 'nonlinear_activation_params': {'negative_slope': 0.1}, 'out_channels': 1, 'use_spectral_norm': False, 'use_weight_norm': True}, 'periods': [2, 3, 5, 7, 11], 'scale_discriminator_params': {'bias': True, 'channels': 128, 'downsample_scales': [2, 2, 4, 4, 1], 'in_channels': 1, 'kernel_sizes': [15, 41, 5, 3], 'max_downsample_channels': 1024, 'max_groups': 16, 'nonlinear_activation': 'LeakyReLU', 'nonlinear_activation_params': {'negative_slope': 0.1}, 'out_channels': 1, 'use_spectral_norm': False, 'use_weight_norm': True}, 'scale_downsample_pooling': 'AvgPool1d', 'scale_downsample_pooling_params': {'kernel_size': 4, 'padding': 2, 'stride': 2}, 'scales': 1}, generator_adv_loss_params: Dict[str, Any] = {'average_by_discriminators': False, 'loss_type': 'mse'}, discriminator_adv_loss_params: Dict[str, Any] = {'average_by_discriminators': False, 'loss_type': 
'mse'}, feat_match_loss_params: Dict[str, Any] = {'average_by_discriminators': False, 'average_by_layers': False, 'include_final_outputs': True}, mel_loss_params: Dict[str, Any] = {'fmax': None, 'fmin': 0, 'fs': 22050, 'hop_length': 256, 'log_base': None, 'n_fft': 1024, 'n_mels': 80, 'win_length': None, 'window': 'hann'}, lambda_adv: float = 1.0, lambda_mel: float = 45.0, lambda_feat_match: float = 2.0, lambda_var: float = 1.0, lambda_align: float = 2.0, cache_generator_outputs: bool = True, plot_pred_mos: bool = False, mos_pred_tool: str = 'utmos')
Bases: AbsGANTTS
JETS module (generator + discriminator).
This is a module of JETS described in `JETS: Jointly Training FastSpeech2 and HiFi-GAN for End to End Text to Speech`_.
generator
Instance of the generator class.
- Type: JETSGenerator
discriminator
Instance of the discriminator class.
generator_adv_loss
Generator adversarial loss instance.
discriminator_adv_loss
Discriminator adversarial loss instance.
feat_match_loss
Feature match loss instance.
mel_loss
Mel spectrogram loss instance.
var_loss
Variance loss instance.
forwardsum_loss
Forward sum loss instance.
lambda_adv
Loss scaling coefficient for adversarial loss.
- Type: float
lambda_mel
Loss scaling coefficient for mel spectrogram loss.
- Type: float
lambda_feat_match
Loss scaling coefficient for feat match loss.
- Type: float
lambda_var
Loss scaling coefficient for variance loss.
- Type: float
lambda_align
Loss scaling coefficient for alignment loss.
- Type: float
cache_generator_outputs
Whether to cache generator outputs.
- Type: bool
plot_pred_mos
Whether to plot predicted MOS during training.
- Type: bool
fs
Sampling rate for saving waveform during inference.
- Type: int
Parameters:
- idim (int) – Input vocabulary size.
- odim (int) – Acoustic feature dimension. The actual output channels will be 1 because JETS is an end-to-end text-to-wave model, but odim is kept for compatibility to indicate the acoustic feature dimension.
- sampling_rate (int) – Sampling rate; not used for training but referred to when saving waveforms during inference.
- generator_type (str) – Type of the generator.
- generator_params (Dict[str, Any]) – Parameter dictionary for the generator.
- discriminator_type (str) – Type of the discriminator.
- discriminator_params (Dict[str, Any]) – Parameter dictionary for the discriminator.
- generator_adv_loss_params (Dict[str, Any]) – Parameter dictionary for the generator adversarial loss.
- discriminator_adv_loss_params (Dict[str, Any]) – Parameter dictionary for the discriminator adversarial loss.
- feat_match_loss_params (Dict[str, Any]) – Parameter dictionary for the feature match loss.
- mel_loss_params (Dict[str, Any]) – Parameter dictionary for the mel loss.
- lambda_adv (float) – Loss scaling coefficient for adversarial loss.
- lambda_mel (float) – Loss scaling coefficient for mel spectrogram loss.
- lambda_feat_match (float) – Loss scaling coefficient for feature match loss.
- lambda_var (float) – Loss scaling coefficient for variance loss.
- lambda_align (float) – Loss scaling coefficient for alignment loss.
- cache_generator_outputs (bool) – Whether to cache generator outputs.
- plot_pred_mos (bool) – Whether to plot predicted MOS during training.
- mos_pred_tool (str) – MOS prediction tool name.
####### Examples
Example of initializing the JETS module:

>>> jets = JETS(idim=100, odim=80)

Example of performing a forward pass:

>>> output = jets.forward(text, text_lengths, feats, feats_lengths,
...                       speech, speech_lengths)
- Raises:NotImplementedError – If the specified MOS prediction tool is not supported.
Initialize JETS module.
- Parameters:
- idim (int) – Input vocabulary size.
- odim (int) – Acoustic feature dimension. The actual output channels will be 1 because JETS is an end-to-end text-to-wave model, but odim is kept for compatibility to indicate the acoustic feature dimension.
- sampling_rate (int) – Sampling rate; not used for training but referred to when saving waveforms during inference.
- generator_type (str) – Generator type.
- generator_params (Dict[str, Any]) – Parameter dict for the generator.
- discriminator_type (str) – Discriminator type.
- discriminator_params (Dict[str, Any]) – Parameter dict for the discriminator.
- generator_adv_loss_params (Dict[str, Any]) – Parameter dict for the generator adversarial loss.
- discriminator_adv_loss_params (Dict[str, Any]) – Parameter dict for the discriminator adversarial loss.
- feat_match_loss_params (Dict[str, Any]) – Parameter dict for the feature match loss.
- mel_loss_params (Dict[str, Any]) – Parameter dict for the mel loss.
- lambda_adv (float) – Loss scaling coefficient for adversarial loss.
- lambda_mel (float) – Loss scaling coefficient for mel spectrogram loss.
- lambda_feat_match (float) – Loss scaling coefficient for feature match loss.
- lambda_var (float) – Loss scaling coefficient for variance loss.
- lambda_align (float) – Loss scaling coefficient for alignment loss.
- cache_generator_outputs (bool) – Whether to cache generator outputs.
- plot_pred_mos (bool) – Whether to plot predicted MOS during training.
- mos_pred_tool (str) – MOS prediction tool name.
forward(text: Tensor, text_lengths: Tensor, feats: Tensor, feats_lengths: Tensor, speech: Tensor, speech_lengths: Tensor, sids: Tensor | None = None, spembs: Tensor | None = None, lids: Tensor | None = None, forward_generator: bool = True, **kwargs) → Dict[str, Any]
Perform generator forward.
This method handles the forward pass of the JETS model, running either the generator or the discriminator step depending on the forward_generator flag.
- Parameters:
- text (Tensor) – Text index tensor of shape (B, T_text).
- text_lengths (Tensor) – Text length tensor of shape (B,).
- feats (Tensor) – Feature tensor of shape (B, T_feats, aux_channels).
- feats_lengths (Tensor) – Feature length tensor of shape (B,).
- speech (Tensor) – Speech waveform tensor of shape (B, T_wav).
- speech_lengths (Tensor) – Speech length tensor of shape (B,).
- sids (Optional[Tensor]) – Speaker index tensor of shape (B,) or (B, 1).
- spembs (Optional[Tensor]) – Speaker embedding tensor of shape (B, spk_embed_dim).
- lids (Optional[Tensor]) – Language index tensor of shape (B,) or (B, 1).
- forward_generator (bool) – Whether to forward through the generator.
- Returns:
- loss (Tensor): Loss scalar tensor.
- stats (Dict[str, float]): Statistics to be monitored.
- weight (Tensor): Weight tensor to summarize losses.
- optim_idx (int): Optimizer index (0 for G and 1 for D).
- Return type: Dict[str, Any]
####### Examples
>>> text = torch.tensor([[1, 2, 3]])
>>> text_lengths = torch.tensor([3])
>>> feats = torch.randn(1, 10, 80)
>>> feats_lengths = torch.tensor([10])
>>> speech = torch.randn(1, 16000)
>>> speech_lengths = torch.tensor([16000])
>>> output = model.forward(text, text_lengths, feats, feats_lengths,
... speech, speech_lengths)
>>> print(output['loss'].shape)
torch.Size([])
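The optim_idx in the returned dict tells the trainer which optimizer to step (0 for the generator, 1 for the discriminator). A minimal sketch of the alternating GAN update schedule, using a hypothetical pure-Python stub in place of the real model (the stub and function names are illustrative, not part of the espnet2 API):

```python
# Sketch of the alternating generator/discriminator updates a GAN trainer
# performs around JETS.forward(). The stub below only mimics the returned
# dict contract; it is NOT the real espnet2 model.

def stub_forward(forward_generator: bool) -> dict:
    """Mimic JETS.forward()'s return contract: optim_idx selects the optimizer."""
    return {
        "loss": 0.0,    # scalar loss (a Tensor in the real model)
        "stats": {},    # statistics to be monitored
        "optim_idx": 0 if forward_generator else 1,  # 0 -> G optimizer, 1 -> D
    }

def train_steps(num_iters: int) -> list:
    """Alternate generator and discriminator updates, as a GAN trainer would."""
    schedule = []
    for it in range(num_iters):
        # Even iterations forward the generator, odd ones the discriminator.
        out = stub_forward(forward_generator=(it % 2 == 0))
        # A real trainer would call optimizers[out["optim_idx"]].step() here.
        schedule.append(out["optim_idx"])
    return schedule
```

For example, `train_steps(4)` returns `[0, 1, 0, 1]`, i.e. the generator and discriminator optimizers are stepped in turn.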
inference(text: Tensor, feats: Tensor | None = None, pitch: Tensor | None = None, energy: Tensor | None = None, use_teacher_forcing: bool = False, **kwargs) → Dict[str, Tensor]
Run inference.
- Parameters:
- text (Tensor) – Input text index tensor (T_text,).
- feats (Tensor) – Feature tensor (T_feats, aux_channels).
- pitch (Tensor) – Pitch tensor (T_feats, 1).
- energy (Tensor) – Energy tensor (T_feats, 1).
- use_teacher_forcing (bool) – Whether to use teacher forcing.
- Returns:
- wav (Tensor): Generated waveform tensor (T_wav,).
- duration (Tensor): Predicted duration tensor (T_text,).
- Return type: Dict[str, Tensor]
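Because inference returns a raw waveform rather than acoustic features, the stored sampling rate (the fs attribute) is what you need when writing the result to disk. A minimal standard-library sketch, with a synthetic sine wave standing in for the model's output["wav"] tensor (the save_wav helper is illustrative, not part of espnet2):

```python
import math
import struct
import wave

def save_wav(samples, path, fs=22050):
    """Write float samples in [-1, 1] as 16-bit PCM mono at sampling rate fs."""
    # Clamp each sample and scale it to the int16 range before packing.
    pcm = b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples
    )
    with wave.open(path, "wb") as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit PCM
        f.setframerate(fs)  # sampling rate stored in the header
        f.writeframes(pcm)

# A 0.1 s, 440 Hz sine stands in for output["wav"] from JETS.inference().
fake_wav = [math.sin(2 * math.pi * 440 * t / 22050) for t in range(2205)]
save_wav(fake_wav, "out.wav", fs=22050)
```

With the real model, the list comprehension would be replaced by the returned wav tensor (e.g. converted via `.numpy().tolist()`), keeping fs consistent with the sampling_rate the model was constructed with.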
property require_raw_speech
Return whether or not speech is required.
property require_vocoder
Return whether or not vocoder is required.