espnet2.asr.encoder.contextual_block_conformer_encoder.ContextualBlockConformerEncoder
class espnet2.asr.encoder.contextual_block_conformer_encoder.ContextualBlockConformerEncoder(input_size: int, output_size: int = 256, attention_heads: int = 4, linear_units: int = 2048, num_blocks: int = 6, dropout_rate: float = 0.1, positional_dropout_rate: float = 0.1, attention_dropout_rate: float = 0.0, input_layer: str | None = 'conv2d', normalize_before: bool = True, concat_after: bool = False, positionwise_layer_type: str = 'linear', positionwise_conv_kernel_size: int = 3, macaron_style: bool = False, pos_enc_class=<class 'espnet2.legacy.nets.pytorch_backend.transformer.embedding.StreamPositionalEncoding'>, selfattention_layer_type: str = 'rel_selfattn', activation_type: str = 'swish', use_cnn_module: bool = True, cnn_module_kernel: int = 31, padding_idx: int = -1, block_size: int = 40, hop_size: int = 16, look_ahead: int = 16, init_average: bool = True, ctx_pos_enc: bool = True)
Bases: AbsEncoder
Contextual Block Conformer encoder module.
- Parameters:
- input_size – input dimension
- output_size – dimension of attention
- attention_heads – the number of heads of multi head attention
- linear_units – the number of units of position-wise feed forward
- num_blocks – the number of encoder blocks
- dropout_rate – dropout rate
- attention_dropout_rate – dropout rate in attention
- positional_dropout_rate – dropout rate after adding positional encoding
- input_layer – input layer type
- pos_enc_class – PositionalEncoding or ScaledPositionalEncoding
- normalize_before – whether to use layer_norm before the first block
- concat_after – whether to concat attention layer's input and output. If True, an additional linear layer is applied, i.e. x -> x + linear(concat(x, att(x))); if False, no additional linear layer is applied, i.e. x -> x + att(x)
- positionwise_layer_type – linear or conv1d
- positionwise_conv_kernel_size – kernel size of positionwise conv1d layer
- padding_idx – padding_idx for input_layer=embed
- block_size – block size for contextual block processing
- hop_size – hop size for block processing
- look_ahead – look-ahead size for block processing
- init_average – whether to use the average as the initial context (otherwise max values)
- ctx_pos_enc – whether to apply positional encoding to the context vectors
Initialize internal Module state, shared by both nn.Module and ScriptModule.
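A minimal instantiation sketch (not part of the original docs): the 80-dimensional input size is illustrative for log-mel features, and the block geometry below simply restates the documented defaults.

```python
from espnet2.asr.encoder.contextual_block_conformer_encoder import (
    ContextualBlockConformerEncoder,
)

# Illustrative values: 80-dim log-mel features, default streaming geometry.
encoder = ContextualBlockConformerEncoder(
    input_size=80,
    output_size=256,
    attention_heads=4,
    linear_units=2048,
    num_blocks=6,
    block_size=40,   # frames per context block
    hop_size=16,     # shift between consecutive blocks
    look_ahead=16,   # future frames visible to each block
)
print(encoder.output_size())  # 256
```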
forward(xs_pad: Tensor, ilens: Tensor, prev_states: Tensor = None, is_final=True, infer_mode=False) → Tuple[Tensor, Tensor, Tensor | None]
Encode the input sequence, dispatching to forward_train or forward_infer depending on infer_mode.
- Parameters:
- xs_pad – input tensor (B, L, D)
- ilens – input length (B)
- prev_states – previous encoder states; only relevant when infer_mode=True
- is_final – whether this is the final chunk of the utterance (inference only)
- infer_mode – whether the call is for inference. This distinguishes forward_train (train and validate) from forward_infer (decode).
- Returns: encoded output tensor, output lengths, and optional encoder states
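A hedged usage sketch for the training-mode forward pass, continuing with the encoder built above; the shapes are illustrative and assume the default conv2d input layer, which subsamples the time axis by roughly a factor of 4.

```python
import torch

xs_pad = torch.randn(2, 100, 80)        # batch of 2, 100 frames, 80-dim feats
ilens = torch.tensor([100, 80])         # true lengths before padding

out, olens, _ = encoder(xs_pad, ilens)  # infer_mode=False -> forward_train
print(out.shape)                        # (2, T', 256), T' ~ L / 4
print(olens)                            # per-utterance output lengths
```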
forward_infer(xs_pad: Tensor, ilens: Tensor, prev_states: Tensor = None, is_final: bool = True) → Tuple[Tensor, Tensor, Tensor | None]
Blockwise streaming encoding for inference (decoding).
- Parameters:
- xs_pad – input tensor (B, L, D)
- ilens – input length (B)
- prev_states – encoder states returned by the previous call (None for the first chunk)
- is_final – whether this chunk is the last one of the utterance
- Returns: encoded output tensor, output lengths, and states to pass to the next call
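A streaming sketch built on the signature above: the 32-frame chunk size and the pattern of threading prev_states from one call to the next are my assumptions for illustration, not guarantees from the docs.

```python
import torch

encoder.eval()
utt = torch.randn(1, 256, 80)          # one utterance, 256 frames
states = None
with torch.no_grad():
    chunks = utt.split(32, dim=1)      # illustrative 32-frame chunks
    for i, chunk in enumerate(chunks):
        clens = torch.tensor([chunk.size(1)])
        out, olens, states = encoder.forward_infer(
            chunk,
            clens,
            prev_states=states,
            is_final=(i == len(chunks) - 1),
        )
        # `out` holds the frames encoded so far for this chunk (possibly
        # empty until enough context accumulates); `states` feeds the
        # next call.
```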
forward_train(xs_pad: Tensor, ilens: Tensor, prev_states: Tensor = None) → Tuple[Tensor, Tensor, Tensor | None]
Encode the whole utterance with blockwise processing, for training and validation.
- Parameters:
- xs_pad – input tensor (B, L, D)
- ilens – input length (B)
- prev_states – not used currently
- Returns: encoded output tensor, output lengths, and optional encoder states
output_size() → int
Return the output dimension of the encoder.
