easycv.models.heads package

Submodules

easycv.models.heads.cls_head module

class easycv.models.heads.cls_head.ClsHead(with_avg_pool=False, label_smooth=0.0, in_channels=2048, with_fc=True, num_classes=1000, loss_config={'type': 'CrossEntropyLossWithLabelSmooth'}, input_feature_index=[0], init_cfg={'bias': 0.0, 'layer': 'Linear', 'std': 0.01, 'type': 'Normal'}, use_num_classes=True)[source]

Bases: torch.nn.modules.module.Module

Simplest classifier head, with only one fc layer. Note that by the EasyCV module design, the input is always a feature list: feature_list = [tensor, tensor, …]

__init__(with_avg_pool=False, label_smooth=0.0, in_channels=2048, with_fc=True, num_classes=1000, loss_config={'type': 'CrossEntropyLossWithLabelSmooth'}, input_feature_index=[0], init_cfg={'bias': 0.0, 'layer': 'Linear', 'std': 0.01, 'type': 'Normal'}, use_num_classes=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

init_weights()[source]
forward(x: List[torch.Tensor]) → List[torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

loss(cls_score: List[torch.Tensor], labels: torch.Tensor) → Dict[str, torch.Tensor][source]
Parameters
  • cls_score – [N x num_classes]

  • labels – [N] if mixup is not used, otherwise [N x num_classes]

mixup_loss(cls_score, labels_1, labels_2, lam) → Dict[str, torch.Tensor][source]
training: bool
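
Example: a minimal usage sketch (assumptions: ClsHead is importable from easycv.models.heads, and the returned loss dict uses a 'loss' key):

   import torch
   from easycv.models.heads import ClsHead  # assumed package-level export

   # Pool backbone features, project with the fc layer, and compute the loss.
   head = ClsHead(with_avg_pool=True, in_channels=2048, num_classes=1000)
   head.init_weights()

   # Inputs follow the feature-list convention: [tensor, tensor, ...].
   feats = [torch.randn(4, 2048, 7, 7)]    # e.g. a ResNet-50 final feature map
   cls_score = head(feats)                 # list holding one [4, 1000] logits tensor
   labels = torch.randint(0, 1000, (4,))
   losses = head.loss(cls_score, labels)   # dict of loss terms, e.g. {'loss': ...}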

easycv.models.heads.contrastive_head module

class easycv.models.heads.contrastive_head.ContrastiveHead(temperature=0.1)[source]

Bases: torch.nn.modules.module.Module

Head for contrastive learning.

__init__(temperature=0.1)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pos, neg)[source]
Parameters
  • pos (Tensor) – Nx1 positive similarity

  • neg (Tensor) – Nxk negative similarity

training: bool
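
The forward pass maps the positive and negative similarities to an InfoNCE-style loss. A minimal re-implementation sketch, assuming the standard MoCo-style formulation (positive logit at index 0 of each row, scaled by the temperature, then cross-entropy):

   import torch
   import torch.nn as nn

   def contrastive_loss(pos, neg, temperature=0.1):
       # pos: [N, 1] positive similarities; neg: [N, k] negative similarities.
       logits = torch.cat((pos, neg), dim=1) / temperature   # [N, 1 + k]
       # The positive logit sits at index 0 of every row.
       labels = torch.zeros(pos.size(0), dtype=torch.long, device=pos.device)
       return nn.functional.cross_entropy(logits, labels)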
class easycv.models.heads.contrastive_head.DebiasedContrastiveHead(temperature=0.1, tau=0.1)[source]

Bases: torch.nn.modules.module.Module

__init__(temperature=0.1, tau=0.1)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pos, neg)[source]
Parameters
  • pos (Tensor) – Nx1 positive similarity

  • neg (Tensor) – Nxk negative similarity

training: bool
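
The tau parameter suggests the debiased InfoNCE estimator of Chuang et al. (2020), where tau is the positive-class prior. A sketch under that assumption (the actual implementation may differ):

   import math
   import torch

   def debiased_contrastive_loss(pos, neg, temperature=0.1, tau=0.1):
       # pos: [N, 1], neg: [N, k]; tau is the assumed positive-class prior.
       k = neg.size(1)
       pos_exp = torch.exp(pos / temperature)                 # [N, 1]
       neg_exp = torch.exp(neg / temperature)                 # [N, k]
       # Debias the negative term, then clamp at its theoretical minimum.
       ng = (neg_exp.sum(dim=1, keepdim=True) - k * tau * pos_exp) / (1.0 - tau)
       ng = torch.clamp(ng, min=k * math.exp(-1.0 / temperature))
       return -torch.log(pos_exp / (pos_exp + ng)).mean()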

easycv.models.heads.latent_pred_head module

class easycv.models.heads.latent_pred_head.LatentPredictHead(predictor, size_average=True)[source]

Bases: torch.nn.modules.module.Module

Head for latent feature prediction, as used in BYOL-style methods.

__init__(predictor, size_average=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

init_weights(init_linear='normal')[source]
forward(input, target)[source]
Parameters
  • input (Tensor) – NxC input features.

  • target (Tensor) – NxC target features.

training: bool
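
A sketch of the BYOL-style prediction loss this head likely computes (assumptions: the predictor maps NxC to NxC, both sides are L2-normalized, and size_average selects mean over sum):

   import torch
   import torch.nn.functional as F

   def latent_predict_loss(predictor, input, target, size_average=True):
       pred = F.normalize(predictor(input), dim=1)     # predict target from input
       target = F.normalize(target.detach(), dim=1)    # target branch carries no gradient
       # -2 * cosine similarity; equals ||a - b||^2 - 2 for unit vectors.
       loss = -2.0 * (pred * target).sum(dim=1)
       return {'loss': loss.mean() if size_average else loss.sum()}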
class easycv.models.heads.latent_pred_head.LatentClsHead(predictor)[source]

Bases: torch.nn.modules.module.Module

Head for latent classification.

__init__(predictor)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

init_weights(init_linear='normal')[source]
forward(input, target)[source]
Parameters
  • input (Tensor) – NxC input features.

  • target (Tensor) – NxC target features.

training: bool
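
A sketch of the latent classification loss (an assumption based on the name and signature: the target's predicted class serves as a pseudo-label for the input):

   import torch
   import torch.nn as nn

   def latent_cls_loss(predictor: nn.Linear, input, target):
       pred = predictor(input)                             # [N, num_classes] logits
       with torch.no_grad():
           label = torch.argmax(predictor(target), dim=1)  # pseudo-labels from target
       return {'loss': nn.functional.cross_entropy(pred, label)}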

easycv.models.heads.mp_metric_head module

easycv.models.heads.mp_metric_head.EmbeddingExplansion(embs, labels, explanion_rate=4, alpha=1.0)[source]

Expand embeddings following the CVPR embedding-expansion method (refer to https://github.com/clovaai/embedding-expansion): combine PK-sampled data and mixup anchor-positive pairs to generate more features; always combine with BatchHardminer. Results on SOP and CUB still need to be added.

Parameters
  • embs – [N, dims] tensor

  • labels – [N] tensor

  • explanion_rate – expansion factor; the N input embeddings are expanded to explanion_rate * N

  • alpha – beta distribution parameter for mixup

Returns

[N * explanion_rate, dims]

Return type

embs
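
An illustrative sketch of the expansion (not the actual implementation): mixup same-label pairs with Beta(alpha, alpha) weights until the batch grows by explanion_rate:

   import numpy as np
   import torch

   def expand_embeddings(embs, labels, explanion_rate=4, alpha=1.0):
       out_embs, out_labels = [embs], [labels]
       for _ in range(explanion_rate - 1):
           perm = torch.randperm(embs.size(0))
           same = (labels == labels[perm]).unsqueeze(1)  # mix only anchor-positive pairs
           lam = float(np.random.beta(alpha, alpha))
           mixed = lam * embs + (1.0 - lam) * embs[perm]
           out_embs.append(torch.where(same, mixed, embs))
           out_labels.append(labels)
       # [N * explanion_rate, dims] embeddings and their repeated labels.
       return torch.cat(out_embs), torch.cat(out_labels)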

class easycv.models.heads.mp_metric_head.MpMetrixHead(with_avg_pool=False, in_channels=2048, loss_config=[{'type': 'CircleLoss', 'loss_weight': 1.0, 'norm': True, 'ddp': True, 'm': 0.4, 'gamma': 80}], input_feature_index=[0], input_label_index=0, ignore_label=None)[source]

Bases: torch.nn.modules.module.Module

Simplest classifier head, with only one fc layer.

__init__(with_avg_pool=False, in_channels=2048, loss_config=[{'type': 'CircleLoss', 'loss_weight': 1.0, 'norm': True, 'ddp': True, 'm': 0.4, 'gamma': 80}], input_feature_index=[0], input_label_index=0, ignore_label=None)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

init_weights(pretrained=None, init_linear='normal', std=0.01, bias=0.0)[source]
forward(x: List[torch.Tensor]) → List[torch.Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

loss(cls_score, labels) → Dict[str, torch.Tensor][source]
training: bool
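
Example: a usage sketch mirroring the defaults above (assumptions: MpMetrixHead is importable from easycv.models.heads, ddp is disabled for single-process use, and the head returns a list of embeddings):

   import torch
   from easycv.models.heads import MpMetrixHead  # assumed package-level export

   head = MpMetrixHead(
       with_avg_pool=True,
       in_channels=2048,
       loss_config=[{
           'type': 'CircleLoss', 'loss_weight': 1.0, 'norm': True,
           'ddp': False,  # assumption: avoid DDP gathering in single-process runs
           'm': 0.4, 'gamma': 80,
       }])
   head.init_weights()

   feats = [torch.randn(8, 2048, 7, 7)]   # feature-list convention, as with ClsHead
   emb = head(feats)                      # list of pooled embeddings
   losses = head.loss(emb, torch.randint(0, 4, (8,)))  # dict of named loss terms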

easycv.models.heads.multi_cls_head module

class easycv.models.heads.multi_cls_head.MultiClsHead(pool_type='adaptive', in_indices=(0,), with_last_layer_unpool=False, backbone='resnet50', norm_cfg={'type': 'BN'}, num_classes=1000)[source]

Bases: torch.nn.modules.module.Module

Multiple classifier heads.

FEAT_CHANNELS = {'resnet50': [64, 256, 512, 1024, 2048]}
FEAT_LAST_UNPOOL = {'resnet50': 100352}
__init__(pool_type='adaptive', in_indices=(0,), with_last_layer_unpool=False, backbone='resnet50', norm_cfg={'type': 'BN'}, num_classes=1000)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

init_weights()[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

loss(cls_score, labels)[source]
training: bool
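
Example: a usage sketch (assumptions: MultiClsHead is importable from easycv.models.heads, and forward receives one feature map per entry of in_indices, with channel widths taken from FEAT_CHANNELS['resnet50']):

   import torch
   from easycv.models.heads import MultiClsHead  # assumed package-level export

   # One linear classifier per selected ResNet-50 stage.
   head = MultiClsHead(pool_type='adaptive', in_indices=(0, 2, 4),
                       backbone='resnet50', num_classes=1000)
   head.init_weights()

   # Stage feature maps with illustrative spatial sizes; channels follow
   # FEAT_CHANNELS['resnet50'] at indices 0, 2 and 4.
   feats = [torch.randn(2, c, s, s)
            for c, s in zip([64, 512, 2048], [56, 28, 7])]
   cls_scores = head(feats)              # one [2, 1000] logits tensor per stage
   losses = head.loss(cls_scores, torch.randint(0, 1000, (2,)))  # per-head loss dict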