easycv.models.utils package¶
Submodules¶
easycv.models.utils.accuracy module¶
easycv.models.utils.activation module¶
- class easycv.models.utils.activation.FReLU(in_channel)[source]¶
Bases: torch.nn.modules.module.Module
- __init__(in_channel)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
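A hedged usage sketch of FReLU (the funnel activation, roughly max(x, T(x)) with a learned per-channel spatial condition T). Only the constructor signature FReLU(in_channel) comes from the docs above; the internal structure described in the comment is an assumption.

```python
import torch
from easycv.models.utils.activation import FReLU

act = FReLU(in_channel=64)        # in_channel must match the input's channel count
x = torch.randn(2, 64, 32, 32)    # (N, C, H, W)
y = act(x)                        # output keeps the input shape
print(y.shape)                    # torch.Size([2, 64, 32, 32])
```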
- easycv.models.utils.activation.build_activation_layer(cfg)[source]¶
Build activation layer.
- Parameters
cfg (dict) – The activation layer config, which should contain: - type (str): Layer type. - layer args: Args needed to instantiate an activation layer.
- Returns
Created activation layer.
- Return type
nn.Module
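A minimal sketch of build_activation_layer, assuming 'ReLU' is a registered activation type; the cfg dict carries the type name plus the layer's own keyword arguments.

```python
from easycv.models.utils.activation import build_activation_layer

# 'type' selects the registered layer; remaining keys are passed to its constructor.
act = build_activation_layer(dict(type='ReLU', inplace=True))
```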
easycv.models.utils.conv_module module¶
- easycv.models.utils.conv_module.build_conv_layer(cfg, *args, **kwargs)[source]¶
Build convolution layer
- Parameters
cfg (None or dict) – cfg should contain: type (str): identify conv layer type. layer args: args needed to instantiate a conv layer.
- Returns
created conv layer
- Return type
layer (nn.Module)
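A hedged sketch of build_conv_layer. Passing cfg=None is assumed to create a plain nn.Conv2d-style layer (the docstring allows None); the extra positional and keyword arguments are forwarded to the conv constructor.

```python
import torch
from easycv.models.utils.conv_module import build_conv_layer

conv = build_conv_layer(None, 3, 16, kernel_size=3, stride=1, padding=1)
out = conv(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32])
```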
- class easycv.models.utils.conv_module.ConvModule(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias='auto', conv_cfg=None, norm_cfg=None, act_cfg={'type': 'ReLU'}, inplace=True, order=('conv', 'norm', 'act'))[source]¶
Bases: torch.nn.modules.module.Module
A conv block that contains conv/norm/activation layers.
- Parameters
in_channels (int) – Same as nn.Conv2d.
out_channels (int) – Same as nn.Conv2d.
kernel_size (int or tuple[int]) – Same as nn.Conv2d.
stride (int or tuple[int]) – Same as nn.Conv2d.
padding (int or tuple[int]) – Same as nn.Conv2d.
dilation (int or tuple[int]) – Same as nn.Conv2d.
groups (int) – Same as nn.Conv2d.
bias (bool or str) – If specified as auto, it will be decided by the norm_cfg. Bias will be set as True if norm_cfg is None, otherwise False.
conv_cfg (dict) – Config dict for convolution layer.
norm_cfg (dict) – Config dict for normalization layer.
act_cfg (dict) – Config of activation layers. Default: dict(type=’ReLU’)
inplace (bool) – Whether to use inplace mode for activation.
order (tuple[str]) – The order of conv/norm/activation layers. It is a sequence of “conv”, “norm” and “act”. Examples are (“conv”, “norm”, “act”) and (“act”, “conv”, “norm”).
- __init__(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias='auto', conv_cfg=None, norm_cfg=None, act_cfg={'type': 'ReLU'}, inplace=True, order=('conv', 'norm', 'act'))[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- property norm¶
- forward(x, activate=True, norm=True)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
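A minimal sketch of a conv + norm + activation block built from the arguments listed above. With norm_cfg set, bias='auto' resolves to False per the bias description.

```python
import torch
from easycv.models.utils.conv_module import ConvModule

block = ConvModule(
    in_channels=3,
    out_channels=16,
    kernel_size=3,
    padding=1,
    norm_cfg=dict(type='BN'),
    act_cfg=dict(type='ReLU'),
    order=('conv', 'norm', 'act'),
)
x = torch.randn(1, 3, 32, 32)
y = block(x)                      # forward(x, activate=True, norm=True)
print(y.shape)                    # torch.Size([1, 16, 32, 32])
```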
easycv.models.utils.conv_ws module¶
- easycv.models.utils.conv_ws.conv_ws_2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, eps=1e-05)[source]¶
- class easycv.models.utils.conv_ws.ConvWS2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, eps=1e-05)[source]¶
Bases: torch.nn.modules.conv.Conv2d
- __init__(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, eps=1e-05)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- bias: Optional[torch.Tensor]¶
- in_channels: int¶
- out_channels: int¶
- kernel_size: Tuple[int, ...]¶
- stride: Tuple[int, ...]¶
- padding: Union[str, Tuple[int, ...]]¶
- dilation: Tuple[int, ...]¶
- transposed: bool¶
- output_padding: Tuple[int, ...]¶
- groups: int¶
- padding_mode: str¶
- weight: torch.Tensor¶
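A hedged sketch of ConvWS2d. It is assumed to be a drop-in nn.Conv2d that standardizes its weights (via conv_ws_2d, with eps for numerical stability) before convolving, as in weight-standardization setups that pair it with GroupNorm; only the constructor signature comes from the docs above.

```python
import torch
from easycv.models.utils.conv_ws import ConvWS2d

conv = ConvWS2d(3, 16, kernel_size=3, padding=1, eps=1e-5)
y = conv(torch.randn(1, 3, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])
```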
easycv.models.utils.dist_utils module¶
- class easycv.models.utils.dist_utils.DistributedLossWrapper(loss, **kwargs)[source]¶
Bases: torch.nn.modules.module.Module
- __init__(loss, **kwargs)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(embeddings, labels, *args, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
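A heavily hedged sketch of DistributedLossWrapper. The docs above only show that it wraps an existing loss module and is called with (embeddings, labels); how tensors are gathered across processes is not shown, and the stand-in loss below is purely illustrative. The call itself would require an initialized distributed process group.

```python
import torch
import torch.nn as nn
from easycv.models.utils.dist_utils import DistributedLossWrapper

base_loss = nn.CrossEntropyLoss()            # stand-in loss module for illustration only
loss_fn = DistributedLossWrapper(loss=base_loss)
embeddings = torch.randn(8, 128)
labels = torch.randint(0, 10, (8,))
# loss = loss_fn(embeddings, labels)         # needs torch.distributed to be initialized
```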
- class easycv.models.utils.dist_utils.DistributedMinerWrapper(miner)[source]¶
Bases: torch.nn.modules.module.Module
- __init__(miner)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(embeddings, labels, ref_emb=None, ref_labels=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
easycv.models.utils.gather_layer module¶
- class easycv.models.utils.gather_layer.GatherLayer(*args, **kwargs)[source]¶
Bases: torch.autograd.function.Function
Gather tensors from all processes, supporting backward propagation.
- static forward(ctx, input)[source]¶
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store arbitrary data that can be then retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
- static backward(ctx, *grads)[source]¶
Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
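A hedged usage sketch of GatherLayer. As a torch.autograd.Function it is invoked via .apply(); it gathers the local tensor from every process while keeping gradients flowing back to the local shard (which plain torch.distributed.all_gather does not do). The assumption that it returns one tensor per process is noted in the comment; a distributed process group must be initialized.

```python
import torch
from easycv.models.utils.gather_layer import GatherLayer

def gather_with_grad(t: torch.Tensor) -> torch.Tensor:
    gathered = GatherLayer.apply(t)        # assumed: a tuple with one tensor per process
    return torch.cat(list(gathered), dim=0)
```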
easycv.models.utils.init_weights module¶
easycv.models.utils.multi_pooling module¶
- class easycv.models.utils.multi_pooling.GeMPooling(p=3, eps=1e-06)[source]¶
Bases: torch.nn.modules.module.Module
GeM pooling, used for image retrieval. p = 1: average pooling. p > 1: increases the contrast of the pooled feature map and focuses on the salient features of the image. p = infinity: spatial max-pooling layer.
- __init__(p=3, eps=1e-06)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
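A minimal sketch of GeMPooling. Generalized-mean pooling computes (mean over spatial positions of x^p)^(1/p), so p interpolates between average pooling (p = 1) and max pooling (p → infinity); the exact output layout is not stated above and is left unasserted here.

```python
import torch
from easycv.models.utils.multi_pooling import GeMPooling

gem = GeMPooling(p=3, eps=1e-6)
feat = torch.randn(2, 2048, 7, 7)   # backbone feature map (N, C, H, W)
pooled = gem(feat)
print(pooled.shape)
```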
- class easycv.models.utils.multi_pooling.MultiPooling(pool_type='adaptive', in_indices=(0), backbone='resnet50')[source]¶
Bases: torch.nn.modules.module.Module
Pooling layers for features from multiple depths.
- POOL_PARAMS = {'resnet50': [{'kernel_size': 10, 'stride': 10, 'padding': 4}, {'kernel_size': 16, 'stride': 8, 'padding': 0}, {'kernel_size': 13, 'stride': 5, 'padding': 0}, {'kernel_size': 8, 'stride': 3, 'padding': 0}, {'kernel_size': 6, 'stride': 1, 'padding': 0}]}¶
- POOL_SIZES = {'resnet50': [12, 6, 4, 3, 2]}¶
- POOL_DIMS = {'resnet50': [9216, 9216, 8192, 9216, 8192]}¶
- __init__(pool_type='adaptive', in_indices=(0), backbone='resnet50')[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
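A hedged sketch of MultiPooling. It is assumed that forward() receives the list of per-stage backbone outputs and that in_indices selects which stages are pooled; the resnet50 tables above give the per-stage pooling parameters, pooled sizes, and flattened dims.

```python
from easycv.models.utils.multi_pooling import MultiPooling

pool = MultiPooling(pool_type='adaptive', in_indices=(0, 1, 2, 3, 4), backbone='resnet50')
# feats = backbone(img)   # hypothetical: list of 5 ResNet-50 stage outputs
# pooled = pool(feats)
```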
- class easycv.models.utils.multi_pooling.MultiAvgPooling(pool_type='adaptive', in_indices=(0), backbone='resnet50')[source]¶
Bases: torch.nn.modules.module.Module
Pooling layers for features from multiple depths.
- POOL_PARAMS = {'resnet50': [{'kernel_size': 10, 'stride': 10, 'padding': 4}, {'kernel_size': 16, 'stride': 8, 'padding': 0}, {'kernel_size': 13, 'stride': 5, 'padding': 0}, {'kernel_size': 8, 'stride': 3, 'padding': 0}, {'kernel_size': 7, 'stride': 1, 'padding': 0}]}¶
- POOL_SIZES = {'resnet50': [12, 6, 4, 3, 1]}¶
- POOL_DIMS = {'resnet50': [9216, 9216, 8192, 9216, 2048]}¶
- __init__(pool_type='adaptive', in_indices=(0), backbone='resnet50')[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- training: bool¶
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
easycv.models.utils.norm module¶
- class easycv.models.utils.norm.SyncIBN(planes, ratio=0.5, eps=1e-05)[source]¶
Bases: torch.nn.modules.module.Module
Instance-Batch Normalization layer from “Two at Once: Enhancing Learning and Generalization Capacities via IBN-Net” <https://arxiv.org/pdf/1807.09441.pdf>
- Parameters
planes (int) – Number of channels for the input tensor.
ratio (float) – Ratio of instance normalization in the IBN layer.
- __init__(planes, ratio=0.5, eps=1e-05)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class easycv.models.utils.norm.IBN(planes, ratio=0.5, eps=1e-05)[source]¶
Bases: torch.nn.modules.module.Module
Instance-Batch Normalization layer from “Two at Once: Enhancing Learning and Generalization Capacities via IBN-Net” <https://arxiv.org/pdf/1807.09441.pdf>
- Parameters
planes (int) – Number of channels for the input tensor.
ratio (float) – Ratio of instance normalization in the IBN layer.
- __init__(planes, ratio=0.5, eps=1e-05)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
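A minimal usage sketch of IBN, assuming the ratio splits the channel dimension between instance normalization and batch normalization (per the ratio description above), so the output keeps the input shape.

```python
import torch
from easycv.models.utils.norm import IBN

ibn = IBN(planes=64, ratio=0.5)
x = torch.randn(4, 64, 56, 56)
y = ibn(x)          # same shape as x
```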
- easycv.models.utils.norm.build_norm_layer(cfg, num_features, postfix='')[source]¶
Build normalization layer
- Parameters
cfg (dict) – cfg should contain: type (str): identify norm layer type. layer args: args needed to instantiate a norm layer. requires_grad (bool): [optional] whether to stop gradient updates.
num_features (int) – number of channels from input.
postfix (int, str) – appended into norm abbreviation to create named layer.
- Returns
The layer name (abbreviation + postfix) and the created norm layer.
- Return type
(name (str), layer (nn.Module))
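A hedged sketch of build_norm_layer, assuming it returns a (name, layer) pair where the name combines the norm abbreviation with the given postfix.

```python
import torch.nn as nn
from easycv.models.utils.norm import build_norm_layer

name, norm = build_norm_layer(dict(type='BN', requires_grad=True), num_features=64, postfix=1)
print(name)                               # e.g. 'bn1' (assumed naming scheme)
print(isinstance(norm, nn.BatchNorm2d))   # True for a BN config
```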
easycv.models.utils.ops module¶
- easycv.models.utils.ops.resize_tensor(input, size=None, scale_factor=None, mode='nearest', align_corners=None, warning=True)[source]¶
Resize tensor with F.interpolate.
- Parameters
input (Tensor) – the input tensor.
size (Tuple[int, int]) – output spatial size.
scale_factor (float or Tuple[float]) – multiplier for spatial size. If scale_factor is a tuple, its length has to match input.dim().
mode (str) – algorithm used for upsampling: ‘nearest’ | ‘linear’ | ‘bilinear’ | ‘bicubic’ | ‘trilinear’ | ‘area’. Default: ‘nearest’
align_corners (bool) –
Geometrically, we consider the pixels of the input and output as squares rather than points. If set to True, the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels.
If set to False, the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation independent of input size when scale_factor is kept the same. This only has an effect when mode is ‘linear’, ‘bilinear’, ‘bicubic’ or ‘trilinear’.
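A minimal sketch of resize_tensor as a thin wrapper over F.interpolate: pass either size or scale_factor, not both.

```python
import torch
from easycv.models.utils.ops import resize_tensor

x = torch.randn(1, 3, 32, 32)
y = resize_tensor(x, size=(64, 64), mode='bilinear', align_corners=False)
print(y.shape)   # torch.Size([1, 3, 64, 64])
```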
easycv.models.utils.pos_embed module¶
- easycv.models.utils.pos_embed.get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False)[source]¶
grid_size: int of the grid height and width.
Returns: pos_embed of shape [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (with or without cls_token).
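A minimal sketch based on the shapes stated above: a 14x14 patch grid with a 768-dim embedding, as in ViT-Base-style models.

```python
from easycv.models.utils.pos_embed import get_2d_sincos_pos_embed

pos_embed = get_2d_sincos_pos_embed(embed_dim=768, grid_size=14, cls_token=True)
print(pos_embed.shape)   # (1 + 14*14, 768) = (197, 768)
```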
easycv.models.utils.res_layer module¶
- class easycv.models.utils.res_layer.ResLayer(block, num_blocks, in_channels, out_channels, expansion=None, stride=1, avg_down=False, conv_cfg=None, norm_cfg={'type': 'BN'}, **kwargs)[source]¶
Bases: torch.nn.modules.container.Sequential
ResLayer to build ResNet style backbone.
- Parameters
block (nn.Module) – Residual block used to build ResLayer.
num_blocks (int) – Number of blocks.
in_channels (int) – Input channels of this block.
out_channels (int) – Output channels of this block.
expansion (int, optional) – The expansion for BasicBlock/Bottleneck. If not specified, it will firstly be obtained via block.expansion. If the block has no attribute “expansion”, the following default values will be used: 1 for BasicBlock and 4 for Bottleneck. Default: None.
stride (int) – stride of the first block. Default: 1.
avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False
conv_cfg (dict, optional) – dictionary to construct and config conv layer. Default: None
norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN’)
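A hedged sketch of ResLayer. `BasicBlock` below stands for any residual block class exposing an `expansion` attribute; its import path is not given above, so the whole construction is left as a commented, hypothetical example.

```python
from easycv.models.utils.res_layer import ResLayer

# from easycv.models.backbones.resnet import BasicBlock   # hypothetical import path
# layer = ResLayer(
#     block=BasicBlock,
#     num_blocks=2,
#     in_channels=64,
#     out_channels=64,
#     stride=1,
#     norm_cfg=dict(type='BN'),
# )
# Since ResLayer subclasses nn.Sequential, calling layer(x) runs the blocks in order.
```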
easycv.models.utils.scale module¶
- class easycv.models.utils.scale.Scale(scale=1.0)[source]¶
Bases: torch.nn.modules.module.Module
A learnable scale parameter
- __init__(scale=1.0)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
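A minimal sketch of Scale: it holds a single learnable scalar, initialized to `scale`, and multiplies its input by it elementwise.

```python
import torch
from easycv.models.utils.scale import Scale

s = Scale(scale=1.0)
y = s(torch.randn(2, 4, 8, 8))   # same shape as the input, scaled by the learnable factor
```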
easycv.models.utils.sobel module¶
- class easycv.models.utils.sobel.Sobel[source]¶
Bases: torch.nn.modules.module.Module
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
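A hedged sketch of Sobel. It takes no constructor arguments and is assumed to apply fixed (non-learned) Sobel edge filters to the input image, as in DeepCluster-style preprocessing; the exact input and output channel counts are assumptions.

```python
import torch
from easycv.models.utils.sobel import Sobel

sobel = Sobel()
img = torch.randn(2, 3, 224, 224)
edges = sobel(img)
print(edges.shape)
```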