espnet2.enh.layers.dc_crn.GluConv2d
class espnet2.enh.layers.dc_crn.GluConv2d(in_channels, out_channels, kernel_size, stride, padding=0)
Bases: Module
Conv2d with Gated Linear Units (GLU).
This layer implements a 2D convolution operation followed by a gating mechanism. The input and output shapes are the same as those of a standard Conv2d layer.
Reference: Section III-B in [1]
- Parameters:
  - in_channels (int) – Number of input channels.
  - out_channels (int) – Number of output channels.
  - kernel_size (int/tuple) – Kernel size for the Conv2d operation.
  - stride (int/tuple) – Stride size for the Conv2d operation.
  - padding (int/tuple) – Padding size for the Conv2d operation.
Examples:

```python
import torch
from espnet2.enh.layers.dc_crn import GluConv2d

glu_conv = GluConv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1)
input_tensor = torch.randn(1, 3, 64, 64)  # (B, C, H, W)
output_tensor = glu_conv(input_tensor)
output_tensor.shape  # torch.Size([1, 16, 62, 62])
```
- Returns:
  - out (torch.Tensor) – Output tensor of shape (B, C_out, H_out, W_out).
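The gating mechanism referenced above can be illustrated with the standard GLU formulation, out = conv(x) * sigmoid(gate_conv(x)): two parallel Conv2d branches of identical shape, one of which is squashed to (0, 1) and used as an elementwise gate. The sketch below is only an illustration of that idea under this assumption, not the ESPnet source.

```python
import torch
import torch.nn as nn


class GluConv2dSketch(nn.Module):
    """Illustrative sketch of a GLU-gated Conv2d (not the ESPnet implementation)."""

    def __init__(self, in_channels, out_channels, kernel_size, stride, padding=0):
        super().__init__()
        # Linear branch: produces the candidate feature map.
        self.conv = nn.Conv2d(
            in_channels, out_channels, kernel_size, stride=stride, padding=padding
        )
        # Gate branch: same output shape, values squashed to (0, 1) by a sigmoid.
        self.gate_conv = nn.Conv2d(
            in_channels, out_channels, kernel_size, stride=stride, padding=padding
        )

    def forward(self, x):
        # Elementwise product applies the gate: out = conv(x) * sigmoid(gate_conv(x))
        return self.conv(x) * torch.sigmoid(self.gate_conv(x))
```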
forward(x)
Forward pass of the GLU convolution.

- Parameters:
  - x (torch.Tensor) – Input tensor of shape (B, C_in, H, W).
- Returns:
  - out (torch.Tensor) – Output tensor of shape (B, C_out, H_out, W_out).
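Since input and output shapes follow a standard Conv2d layer, the output spatial size obeys the usual Conv2d arithmetic, floor((size + 2*padding - kernel_size) / stride) + 1 (with dilation 1). The snippet below checks this relation; conv2d_out_size is a hypothetical helper written here for illustration, not part of the ESPnet API.

```python
import torch
from espnet2.enh.layers.dc_crn import GluConv2d


def conv2d_out_size(size, kernel_size, stride, padding=0):
    # Standard Conv2d output-size rule (dilation assumed to be 1).
    return (size + 2 * padding - kernel_size) // stride + 1


glu_conv = GluConv2d(in_channels=3, out_channels=16, kernel_size=3, stride=2, padding=1)
x = torch.randn(1, 3, 64, 64)
y = glu_conv(x)
# 64 -> (64 + 2*1 - 3) // 2 + 1 = 32
assert y.shape == (1, 16, 32, 32)
assert y.shape[-1] == conv2d_out_size(64, kernel_size=3, stride=2, padding=1)
```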