easycv.models.loss package¶
Submodules¶
easycv.models.loss.iou_loss module¶
- class easycv.models.loss.iou_loss.IOUloss(reduction='none', loss_type='iou')[source]¶
Bases: torch.nn.modules.module.Module
- __init__(reduction='none', loss_type='iou')[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred, target)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
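The IoU-style losses above can be illustrated with a plain-Python sketch for a single pair of boxes. The function name, the `(x1, y1, x2, y2)` corner format, and the `1 - iou**2` form for the `"iou"` type are illustrative assumptions here; the actual module operates on batched tensors.

```python
def iou_loss(pred, target, loss_type="iou", eps=1e-9):
    """IoU / GIoU loss sketch for one pair of axis-aligned boxes,
    each given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty intersections clamp to zero area).
    ix1, iy1 = max(pred[0], target[0]), max(pred[1], target[1])
    ix2, iy2 = min(pred[2], target[2]), min(pred[3], target[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)
    if loss_type == "iou":
        return 1.0 - iou ** 2
    if loss_type == "giou":
        # GIoU penalizes the empty part of the smallest enclosing box.
        cx1, cy1 = min(pred[0], target[0]), min(pred[1], target[1])
        cx2, cy2 = max(pred[2], target[2]), max(pred[3], target[3])
        c_area = (cx2 - cx1) * (cy2 - cy1)
        giou = iou - (c_area - union) / (c_area + eps)
        return 1.0 - giou
    raise ValueError(loss_type)

print(iou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # perfect overlap -> ~0 loss
print(iou_loss((0, 0, 2, 2), (1, 1, 3, 3)))  # partial overlap -> positive loss
```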
easycv.models.loss.mse_loss module¶
- class easycv.models.loss.mse_loss.JointsMSELoss(use_target_weight=False, loss_weight=1.0)[source]¶
Bases: torch.nn.modules.module.Module
MSE loss for heatmaps.
- Parameters
use_target_weight (bool) – Option to use weighted MSE loss. Different joint types may have different target weights.
loss_weight (float) – Weight of the loss. Default: 1.0.
- __init__(use_target_weight=False, loss_weight=1.0)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- training: bool¶
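The weighted-MSE idea behind JointsMSELoss can be sketched in plain Python. The `0.5` factor and the per-joint averaging are assumptions chosen to match common heatmap-regression implementations, not a transcription of the module's tensor code:

```python
def joints_mse_loss(output, target, target_weight=None, loss_weight=1.0):
    """Per-joint MSE over flattened heatmaps, optionally weighted per joint.

    output / target: lists of per-joint heatmaps, each a flat list of floats.
    target_weight: optional per-joint weights (use_target_weight=True case).
    """
    num_joints = len(output)
    total = 0.0
    for j in range(num_joints):
        w = target_weight[j] if target_weight is not None else 1.0
        # Mean squared error for this joint's heatmap, scaled by its weight.
        n = len(output[j])
        se = sum((o - t) ** 2 for o, t in zip(output[j], target[j])) / n
        total += 0.5 * w * se
    return total / num_joints * loss_weight

pred = [[0.0, 1.0], [0.5, 0.5]]
gt = [[0.0, 1.0], [1.0, 0.0]]
print(joints_mse_loss(pred, gt))                        # positive loss
print(joints_mse_loss(pred, gt, target_weight=[1, 0]))  # second joint masked out
```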
easycv.models.loss.pytorch_metric_learning module¶
- class easycv.models.loss.pytorch_metric_learning.FocalLoss2d(gamma=2, weight=None, size_average=None, reduce=None, reduction='mean', num_classes=2)[source]¶
Bases: torch.nn.modules.loss._WeightedLoss
- __init__(gamma=2, weight=None, size_average=None, reduce=None, reduction='mean', num_classes=2)[source]¶
FocalLoss2d: focal loss for mitigating class imbalance in 2-class classification.
- Parameters
gamma – focusing parameter of the focal loss.
weight – same as loss._WeightedLoss.
size_average – same as loss._WeightedLoss.
reduce – same as loss._WeightedLoss.
reduction – same as loss._WeightedLoss.
num_classes – fixed at 2.
- Returns
A FocalLoss2d loss module.
- reduction: str¶
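The focal-loss formula FL(p_t) = -(1 - p_t)^gamma * log(p_t) can be seen on a single binary prediction. This is a sketch of the formula only; the function name and scalar interface are illustrative, and the module itself works on 2-class logit tensors:

```python
import math

def focal_loss_binary(p, y, gamma=2.0):
    """Focal loss for one binary prediction.

    p: predicted probability of class 1; y: ground-truth label in {0, 1}.
    The (1 - p_t)^gamma factor down-weights well-classified examples.
    """
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# An easy, well-classified example is strongly down-weighted...
easy = focal_loss_binary(0.95, 1)
# ...while a hard, misclassified example keeps most of its CE loss.
hard = focal_loss_binary(0.10, 1)
print(easy, hard)
```

With gamma=0 this reduces to plain cross entropy, which is why gamma controls how aggressively the loss focuses on hard examples.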
- class easycv.models.loss.pytorch_metric_learning.DistributeMSELoss[source]¶
Bases: torch.nn.modules.module.Module
- forward(input, target)[source]¶
Defines the computation performed at every call.
- training: bool¶
- class easycv.models.loss.pytorch_metric_learning.CrossEntropyLossWithLabelSmooth(label_smooth=0.1, temperature=1.0, with_cls=False, embedding_size=512, num_classes=10000)[source]¶
Bases: torch.nn.modules.module.Module
- __init__(label_smooth=0.1, temperature=1.0, with_cls=False, embedding_size=512, num_classes=10000)[source]¶
A softmax loss with label smoothing and an optional fc layer (to fit the pytorch metric learning interface).
- Parameters
label_smooth – label smoothing factor. Default: 0.1.
temperature – softmax temperature. Default: 1.0.
with_cls – if True, generates an nn.Linear to map the input embedding from embedding_size to num_classes.
embedding_size – if the input is a feature rather than logits, indicates the embedding dimension.
num_classes – if the input is a feature rather than logits, indicates the number of classes.
- Returns
None
- forward(input, target)[source]¶
Defines the computation performed at every call.
- training: bool¶
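Label-smoothed cross entropy replaces the one-hot target with a softened distribution. The sketch below assumes the common scheme where the true class gets 1 - label_smooth and the remainder is spread uniformly over the other classes; the exact distribution used by this module may differ:

```python
import math

def smoothed_cross_entropy(logits, target, label_smooth=0.1, temperature=1.0):
    """Cross entropy against a label-smoothed target distribution."""
    n = len(logits)
    # Temperature-scaled, numerically stable log-softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    z = sum(math.exp(l - m) for l in scaled)
    log_probs = [l - m - math.log(z) for l in scaled]
    # Smoothed target: 1 - eps on the true class, eps spread over the rest.
    smooth = [label_smooth / (n - 1)] * n
    smooth[target] = 1.0 - label_smooth
    return -sum(q * lp for q, lp in zip(smooth, log_probs))

# Smoothing penalizes over-confident logits relative to plain cross entropy.
print(smoothed_cross_entropy([5.0, 0.0, 0.0], 0, label_smooth=0.0))
print(smoothed_cross_entropy([5.0, 0.0, 0.0], 0, label_smooth=0.1))
```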
- class easycv.models.loss.pytorch_metric_learning.AMSoftmaxLoss(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]¶
Bases: torch.nn.modules.module.Module
- __init__(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]¶
AMSoftmax loss with an fc layer (to fit the pytorch metric learning interface). Paper: https://arxiv.org/pdf/1801.05599.pdf
- Parameters
embedding_size – forward input shape [N, embedding_size].
num_classes – number of classes.
margin – AMSoftmax margin parameter.
scale – AMSoftmax scale parameter; should increase with num_classes.
- forward(x, lb)[source]¶
Defines the computation performed at every call.
- training: bool¶
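The core AM-Softmax step is subtracting the margin m from the target-class cosine similarity before scaling and applying cross entropy. A single-sample sketch on precomputed cosine similarities (the real module computes these from L2-normalized embeddings and class weights):

```python
import math

def am_softmax_loss(cos_sims, target, margin=0.35, scale=30.0):
    """AM-Softmax cross entropy for one sample, given per-class
    cosine similarities in [-1, 1]."""
    # Subtract the additive margin from the target class only, then scale.
    logits = [scale * (c - margin) if i == target else scale * c
              for i, c in enumerate(cos_sims)]
    # Numerically stable negative log-softmax of the target logit.
    m = max(logits)
    z = sum(math.exp(l - m) for l in logits)
    return -(logits[target] - m - math.log(z))

# The margin makes the target class harder to satisfy, so for the same
# similarities the loss is higher than with plain scaled softmax.
print(am_softmax_loss([0.8, 0.3, 0.1], 0))
print(am_softmax_loss([0.8, 0.3, 0.1], 0, margin=0.0))
```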
- class easycv.models.loss.pytorch_metric_learning.ModelParallelSoftmaxLoss(embedding_size=512, num_classes=100000, scale=None, margin=None, bias=True)[source]¶
Bases: torch.nn.modules.module.Module
- __init__(embedding_size=512, num_classes=100000, scale=None, margin=None, bias=True)[source]¶
ModelParallel softmax by sailfish.
- Parameters
embedding_size – forward input shape [N, embedding_size].
num_classes – number of classes.
- forward(x, lb)[source]¶
Defines the computation performed at every call.
- training: bool¶
- class easycv.models.loss.pytorch_metric_learning.ModelParallelAMSoftmaxLoss(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]¶
Bases: torch.nn.modules.module.Module
- __init__(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]¶
ModelParallel AMSoftmax by sailfish.
- Parameters
embedding_size – forward input shape [N, embedding_size].
num_classes – number of classes.
- forward(x, lb)[source]¶
Defines the computation performed at every call.
- training: bool¶
- class easycv.models.loss.pytorch_metric_learning.SoftTargetCrossEntropy(num_classes=1000, **kwargs)[source]¶
Bases: torch.nn.modules.module.Module
- __init__(num_classes=1000, **kwargs)[source]¶
Initializes internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: torch.Tensor, target: torch.Tensor) → torch.Tensor[source]¶
Defines the computation performed at every call.
- training: bool¶
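Soft-target cross entropy generalizes cross entropy from a hard class index to a full target distribution (as produced by mixup or distillation): loss = -sum_c q_c * log_softmax(x)_c. A single-sample plain-Python sketch of that formula, with illustrative names:

```python
import math

def soft_target_cross_entropy(logits, soft_target):
    """Cross entropy of logits against a target probability distribution."""
    # Numerically stable log-softmax.
    m = max(logits)
    z = sum(math.exp(l - m) for l in logits)
    log_probs = [l - m - math.log(z) for l in logits]
    return -sum(q * lp for q, lp in zip(soft_target, log_probs))

# With a one-hot target this reduces to standard cross entropy.
print(soft_target_cross_entropy([2.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
# A mixup-style target blends mass across two classes.
print(soft_target_cross_entropy([2.0, 0.0, 0.0], [0.7, 0.3, 0.0]))
```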