espnet2.torch_utils.get_flash_attn_compatability.is_flash_attn_supported
espnet2.torch_utils.get_flash_attn_compatability.is_flash_attn_supported() → bool
Determines if Flash Attention is supported on the current GPU.
This function checks whether the current GPU is compatible with Flash Attention. It does so by verifying that a CUDA-capable GPU is available and that its device name matches one of the supported GPU models.
- Parameters: None
- Returns: True if the current GPU supports Flash Attention, False otherwise.
- Return type: bool
- Raises: None
Examples
>>> is_flash_attn_supported()
True # if the current GPU is supported
False # if the current GPU is not supported
NOTE
This function currently supports GPUs from the Ampere, Ada, and Hopper architectures. The list of supported GPUs may not be exhaustive.
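For illustration, a minimal sketch of how such a check can be implemented with PyTorch's CUDA utilities is shown below. The model keywords here are assumed examples of Ampere, Ada, and Hopper GPUs, not the actual list used by ESPnet:

```python
import torch

# Hypothetical keyword list; the real implementation may match
# against a different or longer set of supported GPU names.
_SUPPORTED_GPU_KEYWORDS = ("A100", "A6000", "RTX 4090", "H100")


def is_flash_attn_supported() -> bool:
    # Flash Attention requires a CUDA-capable GPU.
    if not torch.cuda.is_available():
        return False
    # Compare the current device's name against known supported models.
    device_name = torch.cuda.get_device_name(torch.cuda.current_device())
    return any(keyword in device_name for keyword in _SUPPORTED_GPU_KEYWORDS)
```

A substring match on the device name keeps the check simple, at the cost of requiring the keyword list to be updated as new supported GPUs are released.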