easycv.models.classification package

Submodules

easycv.models.classification.classification module

class easycv.models.classification.classification.Classification(backbone, train_preprocess=[], with_sobel=False, head=None, neck=None, pretrained=True, mixup_cfg=None)[source]

Bases: easycv.models.base.BaseModel

Parameters
  • pretrained – Select one of {str, True, False/None}:
      if pretrained is a str, load the model from the specified path;
      if pretrained == True, load the model from the default path;
      if pretrained == False or None, load from init weights.

__init__(backbone, train_preprocess=[], with_sobel=False, head=None, neck=None, pretrained=True, mixup_cfg=None)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.
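
A minimal construction sketch, not taken from this page: it assumes the backbone, neck and head arguments accept mmcv-style dict configs, as elsewhere in EasyCV; the registry type names and channel sizes below are illustrative assumptions.

    from easycv.models.classification.classification import Classification

    # Hypothetical dict configs: the 'ResNet', 'LinearNeck' and 'ClsHead' type names
    # and the channel/class sizes are assumptions, not taken from this page.
    model = Classification(
        backbone=dict(type='ResNet', depth=50),
        neck=dict(type='LinearNeck', in_channels=2048, out_channels=512, with_avg_pool=True),
        head=dict(type='ClsHead', in_channels=512, num_classes=1000),
        pretrained=False,  # False/None: weights come from init_weights(), not a checkpoint
    )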

init_weights()[source]
forward_backbone(img: torch.Tensor) → List[torch.Tensor][source]

Forward backbone

Returns

backbone outputs

Return type

x (tuple)

forward_train(img, gt_labels) → Dict[str, torch.Tensor][source]

In forward_train, the model forwards backbone + neck / multi-neck to get a list of output tensors, then passes this list to the head / multi-head to compute each loss.

forward_test(img: torch.Tensor) → Dict[str, torch.Tensor][source]

forward_test generates prob/class predictions from an image; it only supports one neck + one head.

forward_test_label(img, gt_labels) → Dict[str, torch.Tensor][source]

forward_test_label generates prob/class predictions from an image together with its ground-truth labels; it only supports one neck + one head. Note: the head init needs to set the input feature idx.

training: bool
aug_test(imgs)[source]
forward_feature(img) → Dict[str, torch.Tensor][source]
Forward feature means forward backbone + neck/multi-neck and get a dict of output features.

self.neck_num == 0: only forward the backbone; output the backbone feature (with avgpool) under the key neck. self.neck_num > 0: one or more necks are present; output each neck's feature under the key neck_<neckidx>_<featureidx>, such as neck_0_0.

Returns

feature dict, one tensor per key as described above

Return type

x (dict)
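
A feature-extraction sketch, assuming a model built as in the sketch above and configured with one neck, so that the key neck_0_0 exists (key names follow the description above; the input shape is illustrative):

    import torch

    model.eval()
    with torch.no_grad():
        feats = model.forward_feature(torch.randn(1, 3, 224, 224))
    embedding = feats['neck_0_0']   # first neck, first output feature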

update_extract_list(key)[source]
forward(img: torch.Tensor, mode: str = 'train', gt_labels: Optional[torch.Tensor] = None, img_metas: Optional[torch.Tensor] = None) → Dict[str, torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
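
A usage sketch of the mode dispatch, assuming a model built as in the sketch above; shapes and the number of classes are illustrative, and the exact keys of the returned dicts depend on the configured head.

    import torch

    imgs = torch.randn(8, 3, 224, 224)        # NCHW batch
    labels = torch.randint(0, 1000, (8,))     # ground-truth class indices

    model.train()
    losses = model(imgs, mode='train', gt_labels=labels)   # dict of loss tensors

    model.eval()
    with torch.no_grad():
        preds = model(imgs, mode='test')                    # dict of prob/class outputs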

easycv.models.classification.necks module

class easycv.models.classification.necks.LinearNeck(in_channels, out_channels, with_avg_pool=True, with_norm=False)[source]

Bases: torch.nn.modules.module.Module

Linear neck: fc only
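
A minimal shape sketch; it assumes, as with the other necks in this module, that forward takes a list/tuple of backbone feature maps and returns a list with one embedding (that input convention and the sizes below are assumptions):

    import torch
    from easycv.models.classification.necks import LinearNeck

    neck = LinearNeck(in_channels=2048, out_channels=256, with_avg_pool=True)
    neck.init_weights(init_linear='normal')
    out = neck([torch.randn(4, 2048, 7, 7)])   # assumed list input of NCHW feature maps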

__init__(in_channels, out_channels, with_avg_pool=True, with_norm=False)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

init_weights(init_linear='normal')[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.classification.necks.RetrivalNeck(in_channels, out_channels, with_avg_pool=True, cdg_config=['G', 'M'])[source]

Bases: torch.nn.modules.module.Module

RetrivalNeck: refers to "Combination of Multiple Global Descriptors for Image Retrieval"

https://arxiv.org/pdf/1903.10663.pdf

CGD feature: avg pool + gem pooling + max pooling, via pool -> fc -> norm -> concat -> norm.
Avg feature: avg pooling, via avg pool -> syncbn -> fc.

len(cgd_config) > 0: return [CGD, Avg]
len(cgd_config) == 0: return [Avg]

__init__(in_channels, out_channels, with_avg_pool=True, cdg_config=['G', 'M'])[source]

Init RetrivalNeck. This neck doesn't pool the input feature map and doesn't support dynamic input.

Parameters
  • in_channels – Int - input feature map channels

  • out_channels – Int - output feature map channels

  • with_avg_pool – bool, whether to do avg pool for the BNneck

  • cdg_config – list of 'G', 'M', 'S' to configure the output feature; CGD = [gem pooling] + [max pooling] + [mean pooling]; if len(cdg_config) > 0: return [CGD, Avg]; if len(cdg_config) == 0: return [Avg]
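
A construction sketch following the description above; argument values are illustrative, and 'G'/'M' select the gem-pooling and max-pooling branches of the CGD feature as described:

    from easycv.models.classification.necks import RetrivalNeck

    neck = RetrivalNeck(in_channels=2048, out_channels=512,
                        with_avg_pool=True, cdg_config=['G', 'M'])
    # Non-empty cdg_config: the neck is described as returning [CGD, Avg];
    # cdg_config=[] would return only [Avg].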

init_weights(init_linear='normal')[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.classification.necks.FaceIDNeck(in_channels, out_channels, map_shape=1, dropout_ratio=0.4, with_norm=False, bn_type='SyncBN')[source]

Bases: torch.nn.modules.module.Module

FaceID neck: includes BN, dropout, flatten, linear, BN

__init__(in_channels, out_channels, map_shape=1, dropout_ratio=0.4, with_norm=False, bn_type='SyncBN')[source]

Init FaceIDNeck. The FaceID neck doesn't pool the input feature map and doesn't support dynamic input.

Parameters
  • in_channels – Int - input feature map channels

  • out_channels – Int - output feature map channels

  • map_shape – Int or list(int, …), the input feature map shape (w, h), or w when w == h

  • dropout_ratio – float, drop out ratio

  • with_norm – normalize output feature or not

  • bn_type – SyncBN or BN
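
A construction sketch with illustrative values; map_shape=7 assumes a 7x7 backbone feature map (w == h), and since the neck does not pool, the spatial size must match the backbone output. bn_type='BN' is used here because 'SyncBN' requires a distributed setup. The list-wrapped input is an assumption shared with the other necks.

    import torch
    from easycv.models.classification.necks import FaceIDNeck

    neck = FaceIDNeck(in_channels=512, out_channels=256, map_shape=7,
                      dropout_ratio=0.4, with_norm=True, bn_type='BN')
    emb = neck([torch.randn(4, 512, 7, 7)])   # assumed list input of NCHW feature maps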

init_weights(init_linear='normal')[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.classification.necks.MultiLinearNeck(in_channels, out_channels, num_layers=1, with_avg_pool=True)[source]

Bases: torch.nn.modules.module.Module

MultiLinearNeck: multi-fc neck

__init__(in_channels, out_channels, num_layers=1, with_avg_pool=True)[source]
Parameters
  • in_channels – int or list[int]

  • out_channels – int or list[int]

  • num_layers – total number of fc layers

  • with_avg_pool – the input will be average-pooled if True

Returns

None

Raises
  • an error when len(in_channels) != len(out_channels)

  • an error when len(in_channels) != num_layers
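
A construction sketch with illustrative values; it assumes list-typed in_channels/out_channels describe each stacked fc layer, consistent with the Raises conditions above:

    from easycv.models.classification.necks import MultiLinearNeck

    # Two stacked fc layers: 2048 -> 512 -> 128
    neck = MultiLinearNeck(in_channels=[2048, 512],
                           out_channels=[512, 128],
                           num_layers=2,
                           with_avg_pool=True)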

init_weights(init_linear='normal')[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.classification.necks.HRFuseScales(in_channels, out_channels=2048, norm_cfg={'momentum': 0.1, 'type': 'BN'})[source]

Bases: torch.nn.modules.module.Module

Fuse feature maps of multiple scales in HRNet.

Parameters
  • in_channels (list[int]) – The input channels of all scales.

  • out_channels (int) – The channels of the fused feature map. Defaults to 2048.

  • norm_cfg (dict) – dictionary to construct norm layers. Defaults to dict(type='BN', momentum=0.1).

  • init_cfg (dict | list[dict], optional) – Initialization config dict. Defaults to dict(type='Normal', layer='Linear', std=0.01).
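
A construction sketch with illustrative HRNet-style branch channels; the list input convention and the spatial sizes are assumptions:

    import torch
    from easycv.models.classification.necks import HRFuseScales

    neck = HRFuseScales(in_channels=[18, 36, 72, 144], out_channels=2048,
                        norm_cfg=dict(type='BN', momentum=0.1))
    feats = [torch.randn(2, c, 56 // 2 ** i, 56 // 2 ** i)
             for i, c in enumerate([18, 36, 72, 144])]   # four scales, NCHW
    fused = neck(feats)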

__init__(in_channels, out_channels=2048, norm_cfg={'momentum': 0.1, 'type': 'BN'})[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

init_weights(init_linear='normal')[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.classification.necks.ReIDNeck(in_channels, dropout, relu=False, norm=True, out_channels=512)[source]

Bases: torch.nn.modules.module.Module

ReID neck: includes Linear, BN, ReLU, dropout

__init__(in_channels, dropout, relu=False, norm=True, out_channels=512)[source]

Init ReIDNeck. This neck doesn't pool the input feature map and doesn't support dynamic input.

Parameters
  • in_channels – Int - input feature map channels

  • out_channels – Int - output feature map channels

  • dropout – float, drop out ratio

  • relu – whether to append a ReLU after the linear layer

  • norm – whether to apply normalization to the output feature
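
A construction sketch with illustrative values; the expected input format of forward (pooled vector vs. feature map, possibly wrapped in a list) is not documented here, so only construction and weight init are shown:

    from easycv.models.classification.necks import ReIDNeck

    neck = ReIDNeck(in_channels=2048, dropout=0.5, relu=False,
                    norm=True, out_channels=512)
    neck.init_weights(init_linear='kaiming')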

training: bool
init_weights(init_linear='kaiming')[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.