espnet3.demo.runtime.run_inference
espnet3.demo.runtime.run_inference(runtime: DemoRuntime, *, ui_names: List[str], ui_values: List[Any], output_names: List[str]) → List[Any]
Run a single inference pass and map outputs into UI order.
The UI inputs are received as parallel lists (names and values). These are normalized and wrapped into a minimal dataset, then executed via:
- runtime.runner_cls.forward when a runner is configured, or
- the direct callable runtime.model when no runner is configured.
The raw output is then mapped to the UI outputs using runtime.output_keys.
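For orientation, the following is a minimal sketch of the dispatch and output-mapping behaviour described above. The attribute names (runner_cls, model, output_keys) are taken from this page, but the call shapes (e.g. runner_cls.forward(runtime.model, batch)) and the 1:1 alignment of output_keys with output_names are assumptions, not the actual espnet3 implementation.

```python
from typing import Any, Dict, List


def dispatch_and_map(runtime, batch: Dict[str, Any],
                     output_names: List[str]) -> List[Any]:
    # Execute one batch: prefer the configured runner, otherwise call the
    # model directly (hypothetical call shapes; the real code may differ).
    if getattr(runtime, "runner_cls", None) is not None:
        raw = runtime.runner_cls.forward(runtime.model, batch)
    elif getattr(runtime, "model", None) is not None:
        raw = runtime.model(**batch)
    else:
        raise RuntimeError("No runner or model is configured to execute inference.")

    # Map raw outputs into UI order via runtime.output_keys
    # (assumes output_keys is already aligned 1:1 with output_names).
    if len(runtime.output_keys) != len(output_names):
        raise ValueError("UI output names and configured output_keys mismatch.")
    return [raw[key] for key in runtime.output_keys]
```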
- Parameters:
  - runtime (DemoRuntime) – Runtime created by build_runtime().
  - ui_names (List[str]) – UI input field names (keys).
  - ui_values (List[Any]) – UI input values, aligned with ui_names.
  - output_names (List[str]) – UI output component names, used to order results.
- Returns: Output values aligned with output_names.
- Return type: List[Any]
- Raises:
  - ValueError – If UI output names and configured output_keys mismatch, or if no runner is configured and the input schema is unsupported.
  - RuntimeError – If a runner/model is not configured to execute inference.
Example
>>> outputs = run_inference(
... runtime,
... ui_names=["speech"],
... ui_values=[waveform],
... output_names=["text"],
... )

Notes
- Gradio audio inputs are normalized from (sample_rate, np.ndarray) to a float32 waveform NumPy array (see the sketch below).
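A hedged sketch of what such a normalization might look like; the helper name is hypothetical, and the scaling of integer PCM to [-1.0, 1.0] is an assumption not stated on this page.

```python
import numpy as np


def normalize_gradio_audio(value):
    # Gradio's Audio component yields (sample_rate, np.ndarray); keep only
    # the waveform and cast it to float32.
    sample_rate, waveform = value
    waveform = np.asarray(waveform)
    if np.issubdtype(waveform.dtype, np.integer):
        # Assumed behaviour: rescale integer PCM into the [-1.0, 1.0] range.
        waveform = waveform.astype(np.float32) / np.iinfo(waveform.dtype).max
    else:
        waveform = waveform.astype(np.float32)
    return waveform
```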
