easycv.models.loss package

class easycv.models.loss.CrossEntropyLoss(use_sigmoid=False, use_mask=False, reduction='mean', class_weight=None, loss_weight=1.0, loss_name='loss_ce', avg_non_ignore=False, label_ceil=False)[source]

Bases: torch.nn.modules.module.Module

CrossEntropyLoss.

Parameters
  • use_sigmoid (bool, optional) – Whether the prediction uses sigmoid instead of softmax. Defaults to False.

  • use_mask (bool, optional) – Whether to use mask cross entropy loss. Defaults to False.

  • reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”. Defaults to ‘mean’.

  • class_weight (list[float] | str, optional) – Weight of each class. If in str format, read them from a file. Defaults to None.

  • loss_weight (float, optional) – Weight of the loss. Defaults to 1.0.

  • loss_name (str, optional) – Name of the loss item. If you want this loss item to be included into the backward graph, loss_ must be the prefix of the name. Defaults to ‘loss_ce’.

  • avg_non_ignore (bool) – Whether the loss is averaged only over non-ignored targets. Default: False. New in version 0.23.0.

  • label_ceil (bool) – When using BCE with label_ceil=True, label elements in (0, 1] are rounded up to 1. Default: False.
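
Example (a hedged usage sketch; shapes and values are illustrative, not from the EasyCV docs):

    import torch
    from easycv.models.loss import CrossEntropyLoss

    # Softmax-based cross entropy over 10 classes for a batch of 4 samples.
    criterion = CrossEntropyLoss(use_sigmoid=False, reduction='mean', loss_weight=1.0)

    cls_score = torch.randn(4, 10)        # raw logits
    label = torch.randint(0, 10, (4,))    # integer class targets
    loss = criterion(cls_score, label)    # forward() runs via __call__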

__init__(use_sigmoid=False, use_mask=False, reduction='mean', class_weight=None, loss_weight=1.0, loss_name='loss_ce', avg_non_ignore=False, label_ceil=False)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

extra_repr()[source]

Extra repr.

forward(cls_score, label, weight=None, avg_factor=None, reduction_override=None, ignore_index=- 100, **kwargs)[source]

Forward function.

property loss_name

Loss Name.

This function must be implemented and will return the name of this loss function. This name will be used to combine different loss items by simple sum operation. In addition, if you want this loss item to be included into the backward graph, loss_ must be the prefix of the name.

Returns

The name of this loss item.

Return type

str

training: bool
class easycv.models.loss.FacePoseLoss(pose_weight=1.0)[source]

Bases: torch.nn.modules.module.Module

__init__(pose_weight=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.WingLossWithPose(num_points=106, left_eye_left_corner_index=66, right_eye_right_corner_index=79, points_weight=1.0, contour_weight=1.5, eyebrow_weight=1.5, eye_weight=1.7, nose_weight=1.3, lip_weight=1.7, omega=10, epsilon=2)[source]

Bases: torch.nn.modules.module.Module

__init__(num_points=106, left_eye_left_corner_index=66, right_eye_right_corner_index=79, points_weight=1.0, contour_weight=1.5, eyebrow_weight=1.5, eye_weight=1.7, nose_weight=1.3, lip_weight=1.7, omega=10, epsilon=2)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target, pose)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.FocalLoss(use_sigmoid=True, gamma=2.0, alpha=0.25, reduction='mean', loss_weight=1.0, activated=False)[source]

Bases: torch.nn.modules.module.Module

__init__(use_sigmoid=True, gamma=2.0, alpha=0.25, reduction='mean', loss_weight=1.0, activated=False)[source]

Focal Loss

Parameters
  • use_sigmoid (bool, optional) – Whether the prediction uses sigmoid instead of softmax. Defaults to True.

  • gamma (float, optional) – The gamma for calculating the modulating factor. Defaults to 2.0.

  • alpha (float, optional) – A balanced form for Focal Loss. Defaults to 0.25.

  • reduction (str, optional) – The method used to reduce the loss into a scalar. Defaults to ‘mean’. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – Weight of loss. Defaults to 1.0.

  • activated (bool, optional) – Whether the input is activated. If True, it means the input has been activated and can be treated as probabilities. Else, it should be treated as logits. Defaults to False.
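
Example (a hedged sketch assuming per-prediction logits and integer class targets, the common detection convention):

    import torch
    from easycv.models.loss import FocalLoss

    criterion = FocalLoss(use_sigmoid=True, gamma=2.0, alpha=0.25)

    pred = torch.randn(8, 20)              # logits for 8 predictions over 20 classes
    target = torch.randint(0, 20, (8,))    # class index per prediction
    loss = criterion(pred, target, avg_factor=8.0)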

forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning label of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Options are “none”, “mean” and “sum”.

Returns

The calculated loss

Return type

torch.Tensor

training: bool
class easycv.models.loss.VarifocalLoss(use_sigmoid=True, alpha=0.75, gamma=2.0, iou_weighted=True, reduction='mean', loss_weight=1.0)[source]

Bases: torch.nn.modules.module.Module

__init__(use_sigmoid=True, alpha=0.75, gamma=2.0, iou_weighted=True, reduction='mean', loss_weight=1.0)[source]

Varifocal Loss.

Parameters
  • use_sigmoid (bool, optional) – Whether the prediction uses sigmoid instead of softmax. Defaults to True.

  • alpha (float, optional) – A balance factor for the negative part of Varifocal Loss, which is different from the alpha of Focal Loss. Defaults to 0.75.

  • gamma (float, optional) – The gamma for calculating the modulating factor. Defaults to 2.0.

  • iou_weighted (bool, optional) – Whether to weight the loss of the positive examples with the iou target. Defaults to True.

  • reduction (str, optional) – The method used to reduce the loss into a scalar. Defaults to ‘mean’. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – Weight of loss. Defaults to 1.0.
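
Example (a hedged sketch; following the VarifocalNet paper, target is assumed to hold IoU-aware scores with the same shape as pred):

    import torch
    from easycv.models.loss import VarifocalLoss

    criterion = VarifocalLoss(use_sigmoid=True, alpha=0.75, gamma=2.0, iou_weighted=True)

    pred = torch.randn(8, 20)     # classification logits
    target = torch.rand(8, 20)    # IoU-aware targets in [0, 1]
    loss = criterion(pred, target)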

forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning target of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Options are “none”, “mean” and “sum”.

Returns

The calculated loss

Return type

torch.Tensor

training: bool
class easycv.models.loss.GIoULoss(eps=1e-06, reduction='mean', loss_weight=1.0)[source]

Bases: torch.nn.modules.module.Module

__init__(eps=1e-06, reduction='mean', loss_weight=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.IoULoss(linear=False, eps=1e-06, reduction='mean', loss_weight=1.0, mode='log')[source]

Bases: torch.nn.modules.module.Module

IoULoss.

Computing the IoU loss between a set of predicted bboxes and target bboxes.

Parameters
  • linear (bool) – If True, use the linear scale of loss; otherwise the scale is determined by mode. Default: False.

  • eps (float) – Eps to avoid log(0).

  • reduction (str) – Options are “none”, “mean” and “sum”.

  • loss_weight (float) – Weight of loss.

  • mode (str) – Loss scaling mode, including “linear”, “square”, and “log”. Default: ‘log’
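
Example (a hedged sketch assuming boxes in (x1, y1, x2, y2) format, as is conventional for IoU losses):

    import torch
    from easycv.models.loss import IoULoss

    criterion = IoULoss(mode='log', eps=1e-06, reduction='mean')

    pred = torch.tensor([[10., 10., 50., 50.],
                         [20., 20., 60., 60.]])    # predicted boxes
    target = torch.tensor([[12., 12., 48., 52.],
                           [20., 22., 58., 60.]])  # ground-truth boxes
    loss = criterion(pred, target)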

__init__(linear=False, eps=1e-06, reduction='mean', loss_weight=1.0, mode='log')[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning target of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None. Options are “none”, “mean” and “sum”.

training: bool
class easycv.models.loss.YOLOX_IOULoss(reduction='none', loss_type='iou')[source]

Bases: torch.nn.modules.module.Module

__init__(reduction='none', loss_type='iou')[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.JointsMSELoss(use_target_weight=False, loss_weight=1.0)[source]

Bases: torch.nn.modules.module.Module

MSE loss for heatmaps.

Parameters
  • use_target_weight (bool) – Option to use weighted MSE loss. Different joint types may have different target weights.

  • loss_weight (float) – Weight of the loss. Default: 1.0.
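
Example (a hedged sketch; shapes follow the common top-down pose convention of [N, K, H, W] heatmaps and an [N, K, 1] per-joint weight):

    import torch
    from easycv.models.loss import JointsMSELoss

    criterion = JointsMSELoss(use_target_weight=True)

    output = torch.rand(2, 17, 64, 48)      # predicted heatmaps: [N, K, H, W]
    target = torch.rand(2, 17, 64, 48)      # ground-truth heatmaps
    target_weight = torch.ones(2, 17, 1)    # per-joint weights: [N, K, 1]
    loss = criterion(output, target, target_weight)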

__init__(use_target_weight=False, loss_weight=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(output, target, target_weight)[source]

Forward function.

training: bool
class easycv.models.loss.FocalLoss2d(gamma=2, weight=None, size_average=None, reduce=None, reduction='mean', num_classes=2)[source]

Bases: torch.nn.modules.loss._WeightedLoss

__init__(gamma=2, weight=None, size_average=None, reduce=None, reduction='mean', num_classes=2)[source]

FocalLoss2d: a loss to address the class-imbalance problem in 2-class classification.

Parameters
  • gamma – the focal loss gamma parameter.

  • weight – same as in loss._WeightedLoss.

  • size_average – same as in loss._WeightedLoss.

  • reduce – same as in loss._WeightedLoss.

  • reduction – same as in loss._WeightedLoss.

  • num_classes – fixed to 2.

Returns

A FocalLoss2d nn.Module loss object
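
Example (a hedged sketch following the forward docstring’s convention of [N, num_classes] inputs with one-hot targets):

    import torch
    import torch.nn.functional as F
    from easycv.models.loss import FocalLoss2d

    criterion = FocalLoss2d(gamma=2, num_classes=2)

    logits = torch.randn(16, 2)                                # 2-class scores
    target = F.one_hot(torch.randint(0, 2, (16,)), 2).float()  # one-hot targets
    loss = criterion(logits, target)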

forward(input, target)[source]

input: [N, num_classes] scores; target: [N, num_classes], one-hot

reduction: str
class easycv.models.loss.DistributeMSELoss[source]

Bases: torch.nn.modules.module.Module

__init__()[source]

DistributeMSELoss: for faceid age and score prediction (regression by softmax).

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.CrossEntropyLossWithLabelSmooth(label_smooth=0.1, temperature=1.0, with_cls=False, embedding_size=512, num_classes=10000)[source]

Bases: torch.nn.modules.module.Module

__init__(label_smooth=0.1, temperature=1.0, with_cls=False, embedding_size=512, num_classes=10000)[source]

A softmax loss with label smoothing and an optional fc layer (to fit the pytorch metric learning interface).

Parameters
  • label_smooth – label smoothing factor. Default: 0.1.

  • with_cls – if True, generates an nn.Linear that maps the input embedding from embedding_size to num_classes.

  • embedding_size – if the input is a feature rather than logits, indicates the embedding size.

  • num_classes – if the input is a feature rather than logits, indicates the number of classes.

Returns

None

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.AMSoftmaxLoss(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]

Bases: torch.nn.modules.module.Module

__init__(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]

AMSoftmax loss, with an fc layer (to fit the pytorch metric learning interface). Paper: https://arxiv.org/pdf/1801.05599.pdf

Parameters
  • embedding_size – forward input shape [N, embedding_size].

  • num_classes – number of classification classes.

  • margin – AMSoftmax margin parameter.

  • scale – AMSoftmax scale parameter; should increase with num_classes.
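
Example (a hedged sketch; x is a batch of embeddings and lb the integer identity labels):

    import torch
    from easycv.models.loss import AMSoftmaxLoss

    criterion = AMSoftmaxLoss(embedding_size=512, num_classes=1000, margin=0.35, scale=30)

    x = torch.randn(8, 512)              # embeddings: [N, embedding_size]
    lb = torch.randint(0, 1000, (8,))    # identity labels
    loss = criterion(x, lb)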

forward(x, lb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.ModelParallelSoftmaxLoss(embedding_size=512, num_classes=100000, scale=None, margin=None, bias=True)[source]

Bases: torch.nn.modules.module.Module

__init__(embedding_size=512, num_classes=100000, scale=None, margin=None, bias=True)[source]

ModelParallel Softmax by sailfish.

Parameters
  • embedding_size – forward input shape [N, embedding_size].

  • num_classes – number of classification classes.

forward(x, lb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.ModelParallelAMSoftmaxLoss(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]

Bases: torch.nn.modules.module.Module

__init__(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]

ModelParallel AMSoftmax by sailfish.

Parameters
  • embedding_size – forward input shape [N, embedding_size].

  • num_classes – number of classification classes.

forward(x, lb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.SoftTargetCrossEntropy(num_classes=1000, **kwargs)[source]

Bases: torch.nn.modules.module.Module

__init__(num_classes=1000, **kwargs)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x: torch.Tensor, target: torch.Tensor) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.CDNCriterion(num_classes, matcher, weight_dict, losses, eos_coef=None, loss_class_type='ce')[source]

Bases: easycv.models.loss.set_criterion.set_criterion.SetCriterion

This class computes the loss for Conditional DETR. The process happens in two steps:

  1. we compute hungarian assignment between ground truth boxes and the outputs of the model

  2. we supervise each pair of matched ground-truth / prediction (supervise class and box)

__init__(num_classes, matcher, weight_dict, losses, eos_coef=None, loss_class_type='ce')[source]

Create the criterion.

Parameters
  • num_classes – number of object categories, omitting the special no-object category.

  • matcher – module able to compute a matching between targets and proposals.

  • weight_dict – dict containing as key the names of the losses and as values their relative weight.

  • losses – list of all the losses to be applied. See get_loss for the list of available losses.

prep_for_dn(dn_meta)[source]
forward(outputs, targets, aux_num, num_boxes)[source]

This performs the loss computation.

Parameters
  • outputs – dict of tensors; see the output specification of the model for the format.

  • targets – list of dicts, such that len(targets) == batch_size. The expected keys in each dict depend on the losses applied; see each loss’ doc.

  • return_indices – used for visualization. If True, the layer 0-5 indices will be returned as well.

training: bool
class easycv.models.loss.DNCriterion(weight_dict)[source]

Bases: torch.nn.modules.module.Module

This class computes the loss for Conditional DETR. The process happens in two steps:

  1. we compute hungarian assignment between ground truth boxes and the outputs of the model

  2. we supervise each pair of matched ground-truth / prediction (supervise class and box)

__init__(weight_dict)[source]

Create the criterion.

Parameters
  • num_classes – number of object categories, omitting the special no-object category.

  • matcher – module able to compute a matching between targets and proposals.

  • weight_dict – dict containing as key the names of the losses and as values their relative weight.

  • losses – list of all the losses to be applied. See get_loss for the list of available losses.

prepare_for_loss(mask_dict)[source]

Prepare dn components to calculate loss.

Parameters
  • mask_dict – a dict that contains dn information.

tgt_loss_boxes(src_boxes, tgt_boxes, num_tgt)[source]

Compute the losses related to the bounding boxes: the L1 regression loss and the GIoU loss. Targets dicts must contain the key “boxes” containing a tensor of dim [nb_target_boxes, 4]. The target boxes are expected in format (center_x, center_y, w, h), normalized by the image size.

tgt_loss_labels(src_logits_, tgt_labels_, num_tgt, focal_alpha, log=False)[source]

Classification loss (NLL). Targets dicts must contain the key “labels” containing a tensor of dim [nb_target_boxes].

forward(mask_dict, aux_num)[source]

Compute dn loss in criterion.

Parameters
  • mask_dict – a dict of dn information.

  • aux_num – aux loss number.

training: bool
class easycv.models.loss.DBLoss(balance_loss=True, main_loss_type='DiceLoss', alpha=5, beta=10, ohem_ratio=3, eps=1e-06, **kwargs)[source]

Bases: torch.nn.modules.module.Module

Differentiable Binarization (DB) loss function.

Parameters
  • param (dict) – the hyperparameters for DB loss.

__init__(balance_loss=True, main_loss_type='DiceLoss', alpha=5, beta=10, ohem_ratio=3, eps=1e-06, **kwargs)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

training: bool
forward(predicts, labels)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class easycv.models.loss.HungarianMatcher(cost_dict, cost_class_type='ce_cost')[source]

Bases: torch.nn.modules.module.Module

This class computes an assignment between the targets and the predictions of the network For efficiency reasons, the targets don’t include the no_object. Because of this, in general, there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions, while the others are un-matched (and thus treated as non-objects).

__init__(cost_dict, cost_class_type='ce_cost')[source]

Creates the matcher.

Parameters
  • cost_class – relative weight of the classification error in the matching cost.

  • cost_bbox – relative weight of the L1 error of the bounding box coordinates in the matching cost.

  • cost_giou – relative weight of the GIoU loss of the bounding box in the matching cost.

forward(outputs, targets)[source]

Performs the matching.

Parameters
  • outputs – a dict that contains at least these entries:

    “pred_logits”: Tensor of dim [batch_size, num_queries, num_classes] with the classification logits

    “pred_boxes”: Tensor of dim [batch_size, num_queries, 4] with the predicted box coordinates

  • targets – a list of targets (len(targets) == batch_size), where each target is a dict containing:

    “labels”: Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth objects in the target) containing the class labels

    “boxes”: Tensor of dim [num_target_boxes, 4] containing the target box coordinates

Returns

A list of size batch_size, containing tuples of (index_i, index_j) where

  • index_i is the indices of the selected predictions (in order)

  • index_j is the indices of the corresponding selected targets (in order)

For each batch element, it holds: len(index_i) = len(index_j) = min(num_queries, num_target_boxes)
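
Example (a hedged usage sketch; the cost_dict keys shown here are assumptions based on common DETR configs, not confirmed EasyCV names):

    import torch
    from easycv.models.loss import HungarianMatcher

    matcher = HungarianMatcher(
        cost_dict={'cost_class': 1.0, 'cost_bbox': 5.0, 'cost_giou': 2.0},
        cost_class_type='ce_cost')

    outputs = {
        'pred_logits': torch.randn(2, 100, 91),  # [batch_size, num_queries, num_classes]
        'pred_boxes': torch.rand(2, 100, 4),     # normalized (center_x, center_y, w, h)
    }
    targets = [
        {'labels': torch.tensor([3, 7]), 'boxes': torch.rand(2, 4)},  # image 0: 2 objects
        {'labels': torch.tensor([1]), 'boxes': torch.rand(1, 4)},     # image 1: 1 object
    ]

    indices = matcher(outputs, targets)  # one (index_i, index_j) pair per image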

training: bool
class easycv.models.loss.SetCriterion(num_classes, matcher, weight_dict, losses, eos_coef=None, loss_class_type='ce')[source]

Bases: torch.nn.modules.module.Module

This class computes the loss for Conditional DETR. The process happens in two steps:

  1. we compute hungarian assignment between ground truth boxes and the outputs of the model

  2. we supervise each pair of matched ground-truth / prediction (supervise class and box)

__init__(num_classes, matcher, weight_dict, losses, eos_coef=None, loss_class_type='ce')[source]

Create the criterion.

Parameters
  • num_classes – number of object categories, omitting the special no-object category.

  • matcher – module able to compute a matching between targets and proposals.

  • weight_dict – dict containing as key the names of the losses and as values their relative weight.

  • losses – list of all the losses to be applied. See get_loss for the list of available losses.

loss_labels(outputs, targets, indices, num_boxes, log=True)[source]

Classification loss (binary focal loss). Targets dicts must contain the key “labels” containing a tensor of dim [nb_target_boxes].

loss_cardinality(outputs, targets, indices, num_boxes)[source]

Compute the cardinality error, i.e. the absolute error in the number of predicted non-empty boxes. This is not really a loss; it is intended for logging purposes only and does not propagate gradients.

loss_boxes(outputs, targets, indices, num_boxes)[source]

Compute the losses related to the bounding boxes: the L1 regression loss and the GIoU loss. Targets dicts must contain the key “boxes” containing a tensor of dim [nb_target_boxes, 4]. The target boxes are expected in format (center_x, center_y, w, h), normalized by the image size.

loss_centerness(outputs, targets, indices, num_boxes)[source]
loss_iouaware(outputs, targets, indices, num_boxes)[source]
get_loss(loss, outputs, targets, indices, num_boxes, **kwargs)[source]
forward(outputs, targets, num_boxes=None, return_indices=False)[source]

This performs the loss computation.

Parameters
  • outputs – dict of tensors; see the output specification of the model for the format.

  • targets – list of dicts, such that len(targets) == batch_size. The expected keys in each dict depend on the losses applied; see each loss’ doc.

  • return_indices – used for visualization. If True, the layer 0-5 indices will be returned as well.

training: bool
class easycv.models.loss.L1Loss(reduction='mean', loss_weight=1.0)[source]

Bases: torch.nn.modules.module.Module

L1 loss.

Parameters
  • reduction (str, optional) – The method to reduce the loss. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of loss.
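
Example (a hedged sketch showing the optional per-element weight; shapes are illustrative):

    import torch
    from easycv.models.loss import L1Loss

    criterion = L1Loss(reduction='mean', loss_weight=1.0)

    pred = torch.rand(4, 4)
    target = torch.rand(4, 4)
    weight = torch.ones(4, 4)    # per-element loss weights
    loss = criterion(pred, target, weight=weight)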

__init__(reduction='mean', loss_weight=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning target of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

training: bool
class easycv.models.loss.MultiLoss(loss_config_list, weight_1=1.0, weight_2=1.0, gtc_loss='sar', **kwargs)[source]

Bases: torch.nn.modules.module.Module

__init__(loss_config_list, weight_1=1.0, weight_2=1.0, gtc_loss='sar', **kwargs)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(predicts, label_ctc=None, label_sar=None, length=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.SmoothL1Loss(beta=1.0, reduction='mean', loss_weight=1.0)[source]

Bases: torch.nn.modules.module.Module

Smooth L1 loss.

Parameters
  • beta (float, optional) – The threshold in the piecewise function. Defaults to 1.0.

  • reduction (str, optional) – The method to reduce the loss. Options are “none”, “mean” and “sum”. Defaults to “mean”.

  • loss_weight (float, optional) – The weight of loss.
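
For reference, the conventional beta-parameterized Smooth L1 (consistent with beta being the threshold of the piecewise function documented above), with x = pred - target:

    \mathrm{SmoothL1}(x) =
    \begin{cases}
    0.5\, x^{2} / \beta, & \text{if } |x| < \beta \\
    |x| - 0.5\, \beta,   & \text{otherwise}
    \end{cases}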

__init__(beta=1.0, reduction='mean', loss_weight=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning target of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

training: bool
class easycv.models.loss.DiceLoss(smooth=1, exponent=2, reduction='mean', class_weight=None, loss_weight=1.0, ignore_index=255, loss_name='loss_dice', **kwargs)[source]

Bases: torch.nn.modules.module.Module

DiceLoss.

This loss is proposed in V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation.

Parameters
  • smooth (float) – A float number to smooth loss, and avoid NaN error. Default: 1

  • exponent (float) – A float number to calculate the denominator value: sum{x^exponent} + sum{y^exponent}. Default: 2.

  • reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”. Default: ‘mean’.

  • class_weight (list[float] | str, optional) – Weight of each class. If in str format, read them from a file. Defaults to None.

  • loss_weight (float, optional) – Weight of the loss. Default to 1.0.

  • ignore_index (int | None) – The label index to be ignored. Default: 255.

  • loss_name (str, optional) – Name of the loss item. If you want this loss item to be included into the backward graph, loss_ must be the prefix of the name. Defaults to ‘loss_dice’.
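
Example (a hedged sketch assuming segmentation-style inputs of [N, C, H, W] class scores and an [N, H, W] integer label map, as in typical Dice-loss implementations):

    import torch
    from easycv.models.loss import DiceLoss

    criterion = DiceLoss(smooth=1, exponent=2, loss_weight=1.0, ignore_index=255)

    pred = torch.randn(2, 4, 32, 32)           # class scores: [N, C, H, W]
    target = torch.randint(0, 4, (2, 32, 32))  # label map: [N, H, W]
    loss = criterion(pred, target)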

__init__(smooth=1, exponent=2, reduction='mean', class_weight=None, loss_weight=1.0, ignore_index=255, loss_name='loss_dice', **kwargs)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

property loss_name

Loss Name.

This function must be implemented and will return the name of this loss function. This name will be used to combine different loss items by simple sum operation. In addition, if you want this loss item to be included into the backward graph, loss_ must be the prefix of the name.

Returns

The name of this loss item.

Return type

str

training: bool

Submodules

easycv.models.loss.iou_loss module

class easycv.models.loss.iou_loss.YOLOX_IOULoss(reduction='none', loss_type='iou')[source]

Bases: torch.nn.modules.module.Module

__init__(reduction='none', loss_type='iou')[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.iou_loss.IoULoss(linear=False, eps=1e-06, reduction='mean', loss_weight=1.0, mode='log')[source]

Bases: torch.nn.modules.module.Module

IoULoss.

Computing the IoU loss between a set of predicted bboxes and target bboxes.

Parameters
  • linear (bool) – If True, use the linear scale of loss; otherwise the scale is determined by mode. Default: False.

  • eps (float) – Eps to avoid log(0).

  • reduction (str) – Options are “none”, “mean” and “sum”.

  • loss_weight (float) – Weight of loss.

  • mode (str) – Loss scaling mode, including “linear”, “square”, and “log”. Default: ‘log’

__init__(linear=False, eps=1e-06, reduction='mean', loss_weight=1.0, mode='log')[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning target of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None. Options are “none”, “mean” and “sum”.

training: bool
class easycv.models.loss.iou_loss.GIoULoss(eps=1e-06, reduction='mean', loss_weight=1.0)[source]

Bases: torch.nn.modules.module.Module

__init__(eps=1e-06, reduction='mean', loss_weight=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

easycv.models.loss.mse_loss module

class easycv.models.loss.mse_loss.JointsMSELoss(use_target_weight=False, loss_weight=1.0)[source]

Bases: torch.nn.modules.module.Module

MSE loss for heatmaps.

Parameters
  • use_target_weight (bool) – Option to use weighted MSE loss. Different joint types may have different target weights.

  • loss_weight (float) – Weight of the loss. Default: 1.0.

__init__(use_target_weight=False, loss_weight=1.0)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(output, target, target_weight)[source]

Forward function.

training: bool

easycv.models.loss.pytorch_metric_learning module

class easycv.models.loss.pytorch_metric_learning.FocalLoss2d(gamma=2, weight=None, size_average=None, reduce=None, reduction='mean', num_classes=2)[source]

Bases: torch.nn.modules.loss._WeightedLoss

__init__(gamma=2, weight=None, size_average=None, reduce=None, reduction='mean', num_classes=2)[source]

FocalLoss2d: a loss to address the class-imbalance problem in 2-class classification.

Parameters
  • gamma – the focal loss gamma parameter.

  • weight – same as in loss._WeightedLoss.

  • size_average – same as in loss._WeightedLoss.

  • reduce – same as in loss._WeightedLoss.

  • reduction – same as in loss._WeightedLoss.

  • num_classes – fixed to 2.

Returns

A FocalLoss2d nn.Module loss object

weight: Optional[Tensor]
forward(input, target)[source]

input: [N, num_classes] scores; target: [N, num_classes], one-hot

reduction: str
training: bool
class easycv.models.loss.pytorch_metric_learning.DistributeMSELoss[source]

Bases: torch.nn.modules.module.Module

__init__()[source]

DistributeMSELoss: for faceid age and score prediction (regression by softmax).

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.pytorch_metric_learning.CrossEntropyLossWithLabelSmooth(label_smooth=0.1, temperature=1.0, with_cls=False, embedding_size=512, num_classes=10000)[source]

Bases: torch.nn.modules.module.Module

__init__(label_smooth=0.1, temperature=1.0, with_cls=False, embedding_size=512, num_classes=10000)[source]

A softmax loss with label smoothing and an optional fc layer (to fit the pytorch metric learning interface).

Parameters
  • label_smooth – label smoothing factor. Default: 0.1.

  • with_cls – if True, generates an nn.Linear that maps the input embedding from embedding_size to num_classes.

  • embedding_size – if the input is a feature rather than logits, indicates the embedding size.

  • num_classes – if the input is a feature rather than logits, indicates the number of classes.

Returns

None

forward(input, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.pytorch_metric_learning.AMSoftmaxLoss(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]

Bases: torch.nn.modules.module.Module

__init__(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]

AMSoftmax loss, with an fc layer (to fit the pytorch metric learning interface). Paper: https://arxiv.org/pdf/1801.05599.pdf

Parameters
  • embedding_size – forward input shape [N, embedding_size].

  • num_classes – number of classification classes.

  • margin – AMSoftmax margin parameter.

  • scale – AMSoftmax scale parameter; should increase with num_classes.

forward(x, lb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.pytorch_metric_learning.ModelParallelSoftmaxLoss(embedding_size=512, num_classes=100000, scale=None, margin=None, bias=True)[source]

Bases: torch.nn.modules.module.Module

__init__(embedding_size=512, num_classes=100000, scale=None, margin=None, bias=True)[source]

ModelParallel Softmax by sailfish.

Parameters
  • embedding_size – forward input shape [N, embedding_size].

  • num_classes – number of classification classes.

forward(x, lb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.pytorch_metric_learning.ModelParallelAMSoftmaxLoss(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]

Bases: torch.nn.modules.module.Module

__init__(embedding_size=512, num_classes=100000, margin=0.35, scale=30)[source]

ModelParallel AMSoftmax by sailfish.

Parameters
  • embedding_size – forward input shape [N, embedding_size].

  • num_classes – number of classification classes.

forward(x, lb)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class easycv.models.loss.pytorch_metric_learning.SoftTargetCrossEntropy(num_classes=1000, **kwargs)[source]

Bases: torch.nn.modules.module.Module

__init__(num_classes=1000, **kwargs)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x: torch.Tensor, target: torch.Tensor) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool