espnet2.gan_codec.shared.quantizer.modules.distrib.rank
espnet2.gan_codec.shared.quantizer.modules.distrib.rank()
Retrieve the rank of the current process in the distributed setting.
This function checks whether the PyTorch distributed backend is initialized. If it is, it returns the rank (ID) of the current process within the distributed group; otherwise it returns 0, the default rank for single-process execution.
- Returns: The rank of the current process. Returns 0 if not in a distributed environment.
- Return type: int
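As a rough illustration of the behavior described above, the function amounts to a small guard around torch.distributed (a minimal sketch; the actual ESPnet implementation may differ in detail, for example by also checking the world size):

import torch

def rank() -> int:
    # Return this process's rank within the distributed group,
    # falling back to 0 when no distributed backend is initialized.
    if torch.distributed.is_available() and torch.distributed.is_initialized():
        return torch.distributed.get_rank()
    return 0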
Examples
>>> from espnet2.gan_codec.shared.quantizer.modules.distrib import rank
>>> rank()  # distributed backend not initialized, so the default rank is returned
0
>>> # In a distributed run (e.g., launched via torchrun), after
>>> # torch.distributed.init_process_group(backend='nccl') each process
>>> # returns its own rank within the group: 0, 1, 2, ...
NOTE
This function is intended to be used within a context where distributed processing is configured using PyTorch’s torch.distributed module.
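A common pattern (shown here as an assumed usage, not taken from the ESPnet source) is to gate side effects such as logging or checkpointing on the rank returned by this function, so that only the main process performs them:

>>> if rank() == 0:
...     print("Only the rank-0 process logs this message.")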