espnet2.legacy.nets.batch_beam_search.BatchBeamSearch
class espnet2.legacy.nets.batch_beam_search.BatchBeamSearch(scorers: Dict[str, ScorerInterface], weights: Dict[str, float], beam_size: int, vocab_size: int, sos: int, eos: int, token_list: List[str] = None, pre_beam_ratio: float = 1.5, pre_beam_score_key: str = None, return_hs: bool = False, hyp_primer: List[int] = None, normalize_length: bool = False)
Bases: BeamSearch
Batch beam search implementation.
Initialize beam search.
- Parameters:
- scorers (dict[str, ScorerInterface]) – Dict of decoder modules, e.g., Decoder, CTCPrefixScorer, LM. A scorer is ignored if it is None.
- weights (dict[str, float]) – Dict of weights for each scorer. A scorer is ignored if its weight is 0.
- beam_size (int) – The number of hypotheses kept during search.
- vocab_size (int) – Size of the vocabulary.
- sos (int) – Start-of-sequence token id.
- eos (int) – End-of-sequence token id.
- token_list (list[str]) – List of tokens for debug logging.
- pre_beam_score_key (str) – Key of scores used to perform pre-beam search.
- pre_beam_ratio (float) – The beam size in the pre-beam search will be int(pre_beam_ratio * beam_size).
- return_hs (bool) – Whether to return hidden intermediates.
- hyp_primer (list[int]) – Token ids used to prime the hypothesis prefix.
- normalize_length (bool) – If True, select the best ended hypotheses based on length-normalized scores rather than the accumulated scores.
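For orientation, the weighted score combination and the effect of normalize_length can be sketched in plain Python. This is a minimal illustration, not the ESPnet implementation; the scorer names and numbers are made up:

```python
# Sketch of how per-scorer scores are combined using `weights`, and how
# `normalize_length` changes the final ranking of ended hypotheses.
# All names and values here are illustrative, not from ESPnet.

def combine_scores(scores: dict, weights: dict) -> float:
    """Weighted sum over scorers; a scorer with weight 0 contributes nothing."""
    return sum(weights[name] * s
               for name, s in scores.items()
               if weights.get(name, 0.0) != 0.0)

# Two hypothetical ended hypotheses: (token_ids, accumulated_score)
ended = [([1, 5, 9, 2], -4.0), ([1, 7, 2], -3.5)]

def best(ended, normalize_length: bool):
    if normalize_length:
        # Rank by score per token rather than the raw accumulated score.
        return max(ended, key=lambda h: h[1] / len(h[0]))
    return max(ended, key=lambda h: h[1])

print(best(ended, normalize_length=False)[0])  # → [1, 7, 2] (raw score wins)
print(best(ended, normalize_length=True)[0])   # → [1, 5, 9, 2]
```

Length normalization counters beam search's bias toward shorter hypotheses, since every extra token typically adds a negative log-probability to the accumulated score.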
batch_beam(weighted_scores: Tensor, ids: Tensor) β Tuple[Tensor, Tensor, Tensor, Tensor]
Batch-compute topk full token ids and partial token ids.
- Parameters:
- weighted_scores (torch.Tensor) – The weighted sum of scores for each token. Its shape is (n_beam, self.vocab_size).
- ids (torch.Tensor) – The partial token ids to compute topk over. Its shape is (n_beam, self.pre_beam_size).
- Returns: The topk full (prev_hyp, new_token) ids and partial (prev_hyp, new_token) ids. Their shapes are all (self.beam_size,).
- Return type: Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
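The core index arithmetic behind this method, flattening the (n_beam, vocab_size) scores, taking a single topk, and recovering (prev_hyp, new_token) pairs, can be sketched without torch. This is a simplified illustration with made-up numbers; the real method operates on tensors via torch.topk and also returns the partial-scorer ids:

```python
# Sketch of batch-wise topk over flattened (n_beam, vocab_size) scores.
# prev_hyp = flat_index // vocab_size, new_token = flat_index % vocab_size.

def batch_topk(weighted_scores, k):
    n_beam = len(weighted_scores)
    vocab_size = len(weighted_scores[0])
    flat = [(weighted_scores[b][v], b * vocab_size + v)
            for b in range(n_beam) for v in range(vocab_size)]
    flat.sort(key=lambda t: t[0], reverse=True)
    top = [idx for _, idx in flat[:k]]
    prev_hyps = [i // vocab_size for i in top]   # which beam each topk came from
    new_tokens = [i % vocab_size for i in top]   # which token extends that beam
    return prev_hyps, new_tokens

scores = [
    [0.1, 0.7, 0.2],  # beam 0
    [0.6, 0.1, 0.3],  # beam 1
]
prev, tok = batch_topk(scores, k=2)
print(prev, tok)  # → [0, 1] [1, 0]
```

Flattening lets a single topk rank all beam-token continuations jointly, instead of looping over beams as a non-batched search would.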
batchfy(hyps: List[Hypothesis]) β BatchHypothesis
Convert list to batch.
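The list-to-batch conversion pads variable-length token sequences to a common length so they can be stacked. A rough sketch follows, where a plain dict and a hypothetical pad id stand in for the BatchHypothesis of stacked torch tensors that the real method builds:

```python
# Sketch of padding a list of variable-length hypotheses into one batch.
# `pad_id` and the dict layout are illustrative; the real batchfy produces
# a BatchHypothesis with stacked tensors plus a length vector.

def batchfy(yseqs, pad_id=0):
    maxlen = max(len(y) for y in yseqs)
    return {
        "yseq": [y + [pad_id] * (maxlen - len(y)) for y in yseqs],
        "length": [len(y) for y in yseqs],
    }

batch = batchfy([[1, 4, 2], [1, 2]])
print(batch["yseq"])    # → [[1, 4, 2], [1, 2, 0]]
print(batch["length"])  # → [3, 2]
```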
init_hyp(x: Tensor) β BatchHypothesis
Get an initial hypothesis data.
- Parameters: x (torch.Tensor) – The encoder output feature.
- Returns: The initial hypothesis.
- Return type: BatchHypothesis
merge_states(states: Any, part_states: Any, part_idx: int) β Any
Merge states for new hypothesis.
- Parameters:
- states β states of self.full_scorers
- part_states β states of self.part_scorers
- part_idx (int) β The new token id for part_scores
- Returns: The new state dict. Its keys are names of self.full_scorers and self.part_scorers; its values are states of the corresponding scorers.
- Return type: Dict[str, torch.Tensor]
post_process(i: int, maxlen: int, minlen: int, maxlenratio: float, running_hyps: BatchHypothesis, ended_hyps: List[Hypothesis]) β BatchHypothesis
Perform post-processing of beam search iterations.
- Parameters:
- i (int) – The length of hypothesis tokens.
- maxlen (int) – The maximum length of tokens in beam search.
- minlen (int) – The minimum length of tokens in beam search.
- maxlenratio (float) – The maximum length ratio in beam search.
- running_hyps (BatchHypothesis) β The running hypotheses in beam search.
- ended_hyps (List [Hypothesis ]) β The ended hypotheses in beam search.
- Returns: The new running hypotheses.
- Return type: BatchHypothesis
score_full(hyp: BatchHypothesis, x: Tensor, pre_x: Tensor = None) β Tuple[Dict[str, Tensor], Dict[str, Any]]
Score new hypothesis by self.full_scorers.
- Parameters:
- hyp (BatchHypothesis) – Hypotheses with prefix tokens to score
- x (torch.Tensor) β Corresponding input feature
- pre_x (torch.Tensor) β Encoded speech feature for sequential attn (T, D) Sequential attn computes attn first on pre_x then on x, thereby attending to two sources in sequence.
- Returns: Tuple of a score dict of hyp that has string keys of self.full_scorers and tensor score values of shape (self.n_vocab,), and a state dict that has string keys and state values of self.full_scorers.
- Return type: Tuple[Dict[str, torch.Tensor], Dict[str, Any]]
score_partial(hyp: BatchHypothesis, ids: Tensor, x: Tensor, pre_x: Tensor = None) β Tuple[Dict[str, Tensor], Dict[str, Any]]
Score new hypothesis by self.part_scorers.
- Parameters:
- hyp (BatchHypothesis) – Hypotheses with prefix tokens to score
- ids (torch.Tensor) β 2D tensor of new partial tokens to score
- x (torch.Tensor) β Corresponding input feature
- pre_x (torch.Tensor) β Encoded speech feature for sequential attn (T, D) Sequential attn computes attn first on pre_x then on x, thereby attending to two sources in sequence.
- Returns: Tuple of a score dict of hyp that has string keys of self.part_scorers and tensor score values of shape (len(ids),), and a state dict that has string keys and state values of self.part_scorers.
- Return type: Tuple[Dict[str, torch.Tensor], Dict[str, Any]]
search(running_hyps: BatchHypothesis, x: Tensor, pre_x: Tensor = None) β BatchHypothesis
Search new tokens for running hypotheses and encoded speech x.
- Parameters:
- running_hyps (BatchHypothesis) β Running hypotheses on beam
- x (torch.Tensor) β Encoded speech feature (T, D)
- pre_x (torch.Tensor) β Encoded speech feature for sequential attention (T, D)
- Returns: Best sorted hypotheses
- Return type: BatchHypothesis
unbatchfy(batch_hyps: BatchHypothesis) β List[Hypothesis]
Revert batch to list.
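The reverse conversion uses the stored lengths to strip padding and recover the per-hypothesis sequences. A minimal sketch, where a plain dict stands in for BatchHypothesis:

```python
# Sketch of reverting a padded batch to a list of hypotheses.
# The dict layout is illustrative; the real unbatchfy rebuilds a list of
# Hypothesis objects from the stacked tensors in a BatchHypothesis.

def unbatchfy(batch):
    # Slice each row back to its true (unpadded) length.
    return [yseq[:n] for yseq, n in zip(batch["yseq"], batch["length"])]

batch = {"yseq": [[1, 4, 2], [1, 2, 0]], "length": [3, 2]}
print(unbatchfy(batch))  # → [[1, 4, 2], [1, 2]]
```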
