espnet2.asr_transducer.beam_search_transducer.ExtendedHypothesis
class espnet2.asr_transducer.beam_search_transducer.ExtendedHypothesis(score: float, yseq: List[int], dec_state: Tuple[Tensor, Tensor | None] | None = None, lm_state: Dict[str, Any] | List[Any] | None = None, dec_out: Tensor | None = None, lm_score: Tensor | None = None)
Bases: Hypothesis
Extended hypothesis definition for NSC beam search and mAES.
This class extends the default hypothesis to include additional attributes specifically useful for the NSC beam search and modified Adaptive Expansion Search (mAES) algorithms.
dec_out
Decoder output sequence. Shape: (B, D_dec)
- Type: torch.Tensor
lm_score
Log-probabilities of the language model for the given label. Shape: (vocab_size)
- Type: torch.Tensor
Parameters:
- score – Total log-probability.
- yseq – Label sequence as integer ID sequence.
- dec_state – RNN/MEGA Decoder state (None if Stateless).
- lm_state – RNNLM state. ((N, D_lm), (N, D_lm)) or None.
- dec_out – Decoder output sequence. Shape: (B, D_dec).
- lm_score – Log-probabilities of the LM for the given label. Shape: (vocab_size).
Examples
>>> import torch
>>> from espnet2.asr_transducer.beam_search_transducer import ExtendedHypothesis
>>> hyp = ExtendedHypothesis(
... score=-1.0,
... yseq=[1, 2, 3],
... dec_state=(torch.tensor([0.5]), None),
... lm_state=None,
... dec_out=torch.tensor([[0.1, 0.2], [0.3, 0.4]]),
... lm_score=torch.tensor([0.5, 0.6])
... )
>>> print(hyp.score)
-1.0
>>> print(hyp.yseq)
[1, 2, 3]
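To illustrate why the extra attributes matter, the sketch below shows how a beam search step might extend a hypothesis with a new label, adding the label's LM log-probability (from `lm_score`) to the total score. This uses a simplified stand-in dataclass with plain Python lists rather than the actual ESPnet class and tensors; the `extend` helper and the `lm_weight` parameter are illustrative assumptions, not part of the ESPnet API.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ExtendedHypothesis:
    """Simplified stand-in mirroring the fields used here (not the ESPnet class)."""

    score: float                           # total log-probability
    yseq: List[int]                        # label sequence
    dec_out: Optional[List[float]] = None  # cached decoder output
    lm_score: Optional[List[float]] = None # LM log-probs over the vocabulary


def extend(hyp: ExtendedHypothesis, label: int, logp: float,
           lm_weight: float = 0.5) -> ExtendedHypothesis:
    """Append `label` to a hypothesis, combining the transducer log-prob
    with the weighted LM log-prob cached in `lm_score` (illustrative only)."""
    new_score = hyp.score + logp
    if hyp.lm_score is not None:
        new_score += lm_weight * hyp.lm_score[label]
    return ExtendedHypothesis(
        score=new_score,
        yseq=hyp.yseq + [label],
        dec_out=hyp.dec_out,
        lm_score=hyp.lm_score,
    )


hyp = ExtendedHypothesis(score=-1.0, yseq=[1, 2],
                         lm_score=[0.0, -0.5, -1.0])
new_hyp = extend(hyp, label=2, logp=-0.2)
print(new_hyp.yseq)   # [1, 2, 2]
print(new_hyp.score)  # -1.0 + (-0.2) + 0.5 * (-1.0) = -1.7
```

Caching `dec_out` and `lm_score` on the hypothesis is what lets NSC and mAES score many candidate expansions without re-running the decoder or LM for each one.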