easycv.core.evaluation package

Submodules

easycv.core.evaluation.ap module

easycv.core.evaluation.auc_eval module

class easycv.core.evaluation.auc_eval.AucEvaluator(dataset_name=None, metric_names=['neck_auc'], neck_num=None)[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

AUC evaluator for binary classification only.

__init__(dataset_name=None, metric_names=['neck_auc'], neck_num=None)[source]
Parameters
  • dataset_name – eval dataset name

  • metric_names – eval metrics name

  • neck_num – some models contain multiple necks to support multitask learning; neck_num selects which neck output of the model to use for evaluation

easycv.core.evaluation.base_evaluator module

class easycv.core.evaluation.base_evaluator.Evaluator(dataset_name=None, metric_names=[])[source]

Bases: object

Evaluator interface

__init__(dataset_name=None, metric_names=[])[source]

Construct eval ops from tensor

Parameters
  • dataset_name (str) – dataset name to be evaluated

  • metric_names (List[str]) – metric names this evaluator will return

evaluate(prediction_dict, groundtruth_dict, **kwargs)[source]
property metric_names

easycv.core.evaluation.builder module

easycv.core.evaluation.builder.build_evaluator(evaluator_cfg_list)[source]

Build evaluators from a list of evaluator config dicts.

Parameters

evaluator_cfg_list – a list of evaluator config dicts

Returns

a list of evaluators
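
A minimal sketch of building evaluators from configs; the 'type' key naming the registered evaluator class follows the usual EasyCV config convention, which is assumed here:

    from easycv.core.evaluation.builder import build_evaluator

    # Each dict names an evaluator class via its 'type' key; the remaining
    # keys are forwarded to that class's constructor.
    evaluator_cfg_list = [
        dict(type='ClsEvaluator', topk=(1, 5), dataset_name='imagenet'),
        dict(type='RetrivalTopKEvaluator', topk=(1, 2, 4, 8)),
    ]
    evaluators = build_evaluator(evaluator_cfg_list)  # a list of Evaluator instances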

easycv.core.evaluation.classification_eval module

class easycv.core.evaluation.classification_eval.ClsEvaluator(topk=(1, 5), dataset_name=None, metric_names=['neck_top1'], neck_num=None, class_list=None)[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

Classification evaluator.

__init__(topk=(1, 5), dataset_name=None, metric_names=['neck_top1'], neck_num=None, class_list=None)[source]
Parameters
  • topk (int or tuple of int) – evaluate top-k accuracy

  • dataset_name – eval dataset name

  • metric_names – eval metrics name

  • neck_num – some models contain multiple necks to support multitask learning; neck_num selects which neck output of the model to use for evaluation

class easycv.core.evaluation.classification_eval.MultiLabelEvaluator(dataset_name=None, metric_names=['mAP'])[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

Multilabel Classification evaluator.

__init__(dataset_name=None, metric_names=['mAP'])[source]
Parameters
  • dataset_name – eval dataset name

  • metric_names – eval metrics name

mAP(pred, target)[source]

Calculate the mean average precision with respect to classes.

Parameters
  • pred (torch.Tensor | np.ndarray) – The model prediction with shape (N, C), where C is the number of classes.

  • target (torch.Tensor | np.ndarray) – The target of each prediction with shape (N, C), where C is the number of classes. 1 stands for positive examples, 0 stands for negative examples and -1 stands for difficult examples.

Returns

A single float as mAP value.

Return type

float

average_precision(pred, target)[source]

Calculate the average precision for a single class.

AP summarizes a precision-recall curve as the weighted mean of maximum precisions obtained for any r’ > r, where r is the recall:

\text{AP} = \sum_n (R_n - R_{n-1}) P_n

Note that no approximation is involved since the curve is piecewise constant.

Parameters
  • pred (np.ndarray) – The model prediction with shape (N, ).

  • target (np.ndarray) – The target of each prediction with shape (N, ).

Returns

a single float as average precision value.

Return type

float
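
As a self-contained illustration of the formula above, a minimal NumPy sketch of this AP computation (a simplified re-derivation, not the library's exact implementation; the helper name is hypothetical) could look like:

    import numpy as np

    def average_precision_sketch(pred, target):
        # Assumes target holds only 0/1 (difficult examples already filtered out).
        order = np.argsort(-pred)               # rank predictions by descending score
        target = target[order]
        tp = np.cumsum(target == 1)             # cumulative true positives
        precision = tp / np.arange(1, len(target) + 1)
        recall = tp / max((target == 1).sum(), 1)
        # AP = sum_n (R_n - R_{n-1}) * P_n on the piecewise-constant curve.
        prev_recall = np.concatenate(([0.0], recall[:-1]))
        return float(np.sum((recall - prev_recall) * precision))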

easycv.core.evaluation.coco_evaluation module

Class for evaluating object detections with COCO metrics.

class easycv.core.evaluation.coco_evaluation.CocoDetectionEvaluator(classes, include_metrics_per_category=False, all_metrics_per_category=False, coco_analyze=False, dataset_name=None, metric_names=['DetectionBoxes_Precision/mAP'])[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

Class to evaluate COCO detection metrics.

__init__(classes, include_metrics_per_category=False, all_metrics_per_category=False, coco_analyze=False, dataset_name=None, metric_names=['DetectionBoxes_Precision/mAP'])[source]

Constructor.

Parameters
  • classes – a list of class name

  • include_metrics_per_category – If True, include metrics for each category.

  • all_metrics_per_category – Whether to include all the summary metrics for each category in per_category_ap. Be careful with setting it to true if you have more than a handful of categories, because it will pollute your mldash.

  • coco_analyze – If True, will analyze the detection result using coco analysis.

  • dataset_name – If not None, dataset_name will be inserted to each metric name.

clear()[source]

Clears the state to prepare for a fresh evaluation.

add_single_ground_truth_image_info(image_id, groundtruth_dict)[source]

Adds groundtruth for a single image to be used for evaluation.

If the image has already been added, a warning is logged, and groundtruth is ignored.

Parameters
  • image_id – A unique string/integer identifier for the image.

  • groundtruth_dict

    A dictionary containing

    InputDataFields.groundtruth_boxes

    float32 numpy array of shape [num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.

    InputDataFields.groundtruth_classes

    integer numpy array of shape [num_boxes] containing 1-indexed groundtruth classes for the boxes. InputDataFields.groundtruth_is_crowd (optional): integer numpy array of shape [num_boxes] containing iscrowd flag for groundtruth boxes.

add_single_detected_image_info(image_id, detections_dict)[source]

Adds detections for a single image to be used for evaluation.

If a detection has already been added for this image id, a warning is logged, and the detection is skipped.

Parameters
  • image_id – A unique string/integer identifier for the image.

  • detections_dict

    A dictionary containing

    DetectionResultFields.detection_boxes

    float32 numpy array of shape [num_boxes, 4] containing num_boxes detection boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.

    DetectionResultFields.detection_scores

    float32 numpy array of shape [num_boxes] containing detection scores for the boxes.

    DetectionResultFields.detection_classes

    integer numpy array of shape [num_boxes] containing 1-indexed detection classes for the boxes.

Raises

ValueError – If groundtruth for the image_id is not available.
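
A minimal usage sketch, under the assumption that the dictionary keys are the plain strings mirroring the InputDataFields/DetectionResultFields constants named above (toy values, one image, one box):

    import numpy as np
    from easycv.core.evaluation.coco_evaluation import CocoDetectionEvaluator

    evaluator = CocoDetectionEvaluator(classes=['cat', 'dog'])
    evaluator.add_single_ground_truth_image_info(
        image_id='img_0',
        groundtruth_dict={
            # [ymin, xmin, ymax, xmax] in absolute image coordinates
            'groundtruth_boxes': np.array([[10., 10., 80., 50.]], dtype=np.float32),
            'groundtruth_classes': np.array([1]),  # 1-indexed class ids
        })
    evaluator.add_single_detected_image_info(
        image_id='img_0',
        detections_dict={
            'detection_boxes': np.array([[11., 12., 78., 51.]], dtype=np.float32),
            'detection_scores': np.array([0.9], dtype=np.float32),
            'detection_classes': np.array([1]),
        })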

class easycv.core.evaluation.coco_evaluation.CocoMaskEvaluator(classes, include_metrics_per_category=False, dataset_name=None, metric_names=['DetectionMasks_Precision/mAP'])[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

Class to evaluate COCO instance mask metrics.

__init__(classes, include_metrics_per_category=False, dataset_name=None, metric_names=['DetectionMasks_Precision/mAP'])[source]

Constructor.

Parameters
  • classes – a list of class names

  • include_metrics_per_category – If True, include metrics for each category.

clear()[source]

Clears the state to prepare for a fresh evaluation.

add_single_ground_truth_image_info(image_id, groundtruth_dict)[source]

Adds groundtruth for a single image to be used for evaluation.

If the image has already been added, a warning is logged, and groundtruth is ignored.

Parameters
  • image_id – A unique string/integer identifier for the image.

  • groundtruth_dict

    A dictionary containing

    InputDataFields.groundtruth_boxes

    float32 numpy array of shape [num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.

    InputDataFields.groundtruth_classes

    integer numpy array of shape [num_boxes] containing 1-indexed groundtruth classes for the boxes.

    InputDataFields.groundtruth_instance_masks

    uint8 numpy array of shape [num_boxes, image_height, image_width] containing groundtruth masks corresponding to the boxes. The elements of the array must be in {0, 1}.

add_single_detected_image_info(image_id, detections_dict)[source]

Adds detections for a single image to be used for evaluation.

If a detection has already been added for this image id, a warning is logged, and the detection is skipped.

Parameters
  • image_id – A unique string/integer identifier for the image.

  • detections_dict

    A dictionary containing

    DetectionResultFields.detection_scores

    float32 numpy array of shape [num_boxes] containing detection scores for the boxes.

    DetectionResultFields.detection_classes

    integer numpy array of shape [num_boxes] containing 1-indexed detection classes for the boxes.

    DetectionResultFields.detection_masks

    optional uint8 numpy array of shape [num_boxes, image_height, image_width] containing instance masks corresponding to the boxes. The elements of the array must be in {0, 1}.

Raises

ValueError – If groundtruth for the image_id is not available or if spatial shapes of groundtruth_instance_masks and detection_masks are incompatible.

class easycv.core.evaluation.coco_evaluation.CoCoPoseTopDownEvaluator(dataset_name=None, metric_names=['AP'], **kwargs)[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

Class to evaluate COCO keypoint topdown metrics.

__init__(dataset_name=None, metric_names=['AP'], **kwargs)[source]

Construct eval ops from tensor

Parameters
  • dataset_name (str) – dataset name to be evaluated

  • metric_names (List[str]) – metric names this evaluator will return

class easycv.core.evaluation.coco_evaluation.CocoPanopticEvaluator(dataset_name=None, metric_names=['PQ'], classes=None, file_client_args={'backend': 'disk'}, **kwargs)[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

Class to evaluate COCO panoptic metrics.

__init__(dataset_name=None, metric_names=['PQ'], classes=None, file_client_args={'backend': 'disk'}, **kwargs)[source]

Construct eval ops from tensor

Parameters
  • dataset_name (str) – dataset name to be evaluated

  • metric_names (List[str]) – metric names this evaluator will return

evaluate(gt_json, gt_folder, pred_json, pred_folder, categories, nproc=32, classwise=False, **kwargs)[source]
parse_pq_results(pq_results)[source]

Parse the Panoptic Quality results.

easycv.core.evaluation.coco_evaluation.pq_compute_single_core(proc_id, annotation_set, gt_folder, pred_folder, categories, file_client=None, print_log=False)[source]

The single core function to evaluate the metric of Panoptic Segmentation.

Same as the function with the same name in panopticapi. Only the function to load the images is changed to use the file client.

Parameters
  • proc_id (int) – The id of the mini process.

  • gt_folder (str) – The path of the ground truth images.

  • pred_folder (str) – The path of the prediction images.

  • categories (str) – The categories of the dataset.

  • file_client (object) – The file client of the dataset. If None, the backend will be set to disk.

  • print_log (bool) – Whether to print the log. Defaults to False.

easycv.core.evaluation.coco_evaluation.pq_compute_multi_core(matched_annotations_list, gt_folder, pred_folder, categories, file_client=None, nproc=32)[source]

Evaluate the metrics of Panoptic Segmentation with multithreading.

Same as the function with the same name in panopticapi.

Parameters
  • matched_annotations_list (list) – The matched annotation list. Each element is a tuple of annotations of the same image with the format (gt_anns, pred_anns).

  • gt_folder (str) – The path of the ground truth images.

  • pred_folder (str) – The path of the prediction images.

  • categories (str) – The categories of the dataset.

  • file_client (object) – The file client of the dataset. If None, the backend will be set to disk.

  • nproc (int) – Number of processes for panoptic quality computing. Defaults to 32. When nproc exceeds the number of cpu cores, the number of cpu cores is used.

easycv.core.evaluation.coco_tools module

Wrappers for third party pycocotools to be used within object_detection.

Note that nothing in this file is tensorflow related and thus cannot be called directly as a slim metric, for example.

TODO(jonathanhuang): wrap as a slim metric in metrics.py

Usage example: given a set of images with ids in the list image_ids and corresponding lists of numpy arrays encoding groundtruth (boxes and classes) and detections (boxes, scores and classes), where elements of each list correspond to detections/annotations of a single image, evaluation (in multi-class mode) can be invoked as follows:

    groundtruth_dict = coco_tools.ExportGroundtruthToCOCO(
        image_ids, groundtruth_boxes_list, groundtruth_classes_list,
        max_num_classes, output_path=None)
    detections_list = coco_tools.ExportDetectionsToCOCO(
        image_ids, detection_boxes_list, detection_scores_list,
        detection_classes_list, output_path=None)
    groundtruth = coco_tools.COCOWrapper(groundtruth_dict)
    detections = groundtruth.LoadAnnotations(detections_list)
    evaluator = coco_tools.COCOEvalWrapper(groundtruth, detections,
                                           agnostic_mode=False)
    metrics = evaluator.ComputeMetrics()

class easycv.core.evaluation.coco_tools.COCOWrapper(dataset, detection_type='bbox')[source]

Bases: xtcocotools.coco.COCO

Wrapper for the pycocotools COCO class.

__init__(dataset, detection_type='bbox')[source]

COCOWrapper constructor.

See http://mscoco.org/dataset/#format for a description of the format. By default, the coco.COCO class constructor reads from a JSON file. This function replicates that behavior, but loads from a dictionary, allowing us to perform evaluation without writing to external storage.

Parameters
  • dataset – a dictionary holding bounding box annotations in the COCO format.

  • detection_type – type of detections being wrapped. Can be one of [‘bbox’, ‘segmentation’]

Raises

ValueError – if detection_type is unsupported.

LoadAnnotations(annotations)[source]

Load annotations dictionary into COCO datastructure.

See http://mscoco.org/dataset/#format for a description of the annotations format. As above, this function replicates the default behavior of the API but does not require writing to external storage.

Parameters

annotations – python list holding object detection results where each detection is encoded as a dict with required keys [‘image_id’, ‘category_id’, ‘score’] and one of [‘bbox’, ‘segmentation’] based on detection_type.

Returns

a coco.COCO datastructure holding object detection annotations results

Raises
  • ValueError – if annotations is not a list

  • ValueError – if annotations do not correspond to the images contained in self.

class easycv.core.evaluation.coco_tools.COCOEvalWrapper(groundtruth=None, detections=None, agnostic_mode=False, iou_type='bbox')[source]

Bases: easycv.core.evaluation.custom_cocotools.cocoeval.COCOeval

Wrapper for the pycocotools COCOeval class.

To evaluate, create two objects (groundtruth_dict and detections_list) using the conventions listed at http://mscoco.org/dataset/#format. Then call evaluation as follows:

    groundtruth = coco_tools.COCOWrapper(groundtruth_dict)
    detections = groundtruth.LoadAnnotations(detections_list)
    evaluator = coco_tools.COCOEvalWrapper(groundtruth, detections,
                                           agnostic_mode=False)
    metrics = evaluator.ComputeMetrics()

__init__(groundtruth=None, detections=None, agnostic_mode=False, iou_type='bbox')[source]

COCOEvalWrapper constructor.

Note that for the area-based metrics to be meaningful, detection and groundtruth boxes must be in image coordinates measured in pixels.

Parameters
  • groundtruth – a coco.COCO (or coco_tools.COCOWrapper) object holding groundtruth annotations

  • detections – a coco.COCO (or coco_tools.COCOWrapper) object holding detections

  • agnostic_mode – boolean (default: False). If True, evaluation ignores class labels, treating all detections as proposals.

  • iou_type – IOU type to use for evaluation. Supports bbox or segm.

GetCategory(category_id)[source]

Fetches dictionary holding category information given category id.

Parameters

category_id – integer id

Returns

dictionary holding ‘id’, ‘name’.

GetAgnosticMode()[source]

Returns true if COCO Eval is configured to evaluate in agnostic mode.

GetCategoryIdList()[source]

Returns list of valid category ids.

ComputeMetrics(include_metrics_per_category=False, all_metrics_per_category=False)[source]

Computes detection metrics.

Parameters
  • include_metrics_per_category – If True, will include metrics per category.

  • all_metrics_per_category – If true, include all the summary metrics for each category in per_category_ap. Be careful with setting it to true if you have more than a handful of categories, because it will pollute your mldash.

Returns

A tuple (summary_metrics, per_category_ap) where:

  1. summary_metrics is a dictionary holding:

    ‘Precision/mAP’: mean average precision over classes, averaged over IOU thresholds ranging from .5 to .95 with .05 increments

    ‘Precision/mAP@.50IOU’: mean average precision at 50% IOU

    ‘Precision/mAP@.75IOU’: mean average precision at 75% IOU

    ‘Precision/mAP (small)’: mean average precision for small objects (area < 32^2 pixels)

    ‘Precision/mAP (medium)’: mean average precision for medium sized objects (32^2 pixels < area < 96^2 pixels)

    ‘Precision/mAP (large)’: mean average precision for large objects (96^2 pixels < area < 10000^2 pixels)

    ‘Recall/AR@1’: average recall with 1 detection

    ‘Recall/AR@10’: average recall with 10 detections

    ‘Recall/AR@100’: average recall with 100 detections

    ‘Recall/AR@100 (small)’: average recall for small objects with 100 detections

    ‘Recall/AR@100 (medium)’: average recall for medium objects with 100 detections

    ‘Recall/AR@100 (large)’: average recall for large objects with 100 detections

  2. per_category_ap is a dictionary holding category-specific results with keys of the form ‘Precision mAP ByCategory/category’ (without the supercategory part if no supercategories exist). For backward compatibility, ‘PerformanceByCategory’ is included in the output regardless of all_metrics_per_category. If evaluating in class-agnostic mode, per_category_ap is an empty dictionary.

Return type

tuple

Raises

ValueError – If category_stats does not exist.

Analyze()[source]

Analyze detection results.

Returns

A dictionary of analysis result images, where each key is an image name and each value is an [H, W, 3] numpy array representing the image content. See section 4 (Analysis code) at http://cocodataset.org/#detection-eval.

easycv.core.evaluation.coco_tools.ExportSingleImageGroundtruthToCoco(image_id, next_annotation_id, category_id_set, groundtruth_boxes, groundtruth_classes, groundtruth_masks=None, groundtruth_is_crowd=None, super_categories=None)[source]

Export groundtruth of a single image to COCO format.

This function converts groundtruth detection annotations represented as numpy arrays to dictionaries that can be ingested by the COCO evaluation API. Note that the image_ids provided here must match the ones given to ExportSingleImageDetectionBoxesToCoco. We assume that boxes and classes are in correspondence - that is: groundtruth_boxes[i, :] and groundtruth_classes[i] are associated with the same groundtruth annotation.

In the exported result, “area” fields are always set to the area of the groundtruth bounding box.

Parameters
  • image_id – a unique image identifier either of type integer or string.

  • next_annotation_id – integer specifying the first id to use for the groundtruth annotations. All annotations are assigned a continuous integer id starting from this value.

  • category_id_set – A set of valid class ids. Groundtruth with classes not in category_id_set are dropped.

  • groundtruth_boxes – numpy array (float32) with shape [num_gt_boxes, 4]

  • groundtruth_classes – numpy array (int) with shape [num_gt_boxes]

  • groundtruth_masks – optional uint8 numpy array of shape [num_gt_boxes, image_height, image_width] containing groundtruth masks corresponding to the boxes.

  • groundtruth_is_crowd – optional numpy array (int) with shape [num_gt_boxes] indicating whether groundtruth boxes are crowd.

  • super_categories – optional list of str indicating each box super category

Returns

a list of groundtruth annotations for a single image in the COCO format.

Raises

ValueError – if (1) groundtruth_boxes and groundtruth_classes do not have the right lengths or (2) if each of the elements inside these lists do not have the correct shapes or (3) if image_ids are not integers
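
A short usage sketch with toy arrays (values are illustrative only):

    import numpy as np
    from easycv.core.evaluation.coco_tools import ExportSingleImageGroundtruthToCoco

    annotations = ExportSingleImageGroundtruthToCoco(
        image_id=1,
        next_annotation_id=1,
        category_id_set={1, 2},
        groundtruth_boxes=np.array([[10., 10., 80., 50.]], dtype=np.float32),
        groundtruth_classes=np.array([1]))
    # annotations is a list of COCO-format groundtruth dicts for this image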

easycv.core.evaluation.coco_tools.ExportGroundtruthToCOCO(image_ids, groundtruth_boxes, groundtruth_classes, categories, output_path=None)[source]

Export groundtruth detection annotations in numpy arrays to COCO API.

This function converts a set of groundtruth detection annotations represented as numpy arrays to dictionaries that can be ingested by the COCO API. Inputs to this function are three lists: image ids for each groundtruth image, groundtruth boxes for each image and groundtruth classes respectively. Note that the image_ids provided here must match the ones given to the ExportDetectionsToCOCO function in order for evaluation to work properly. We assume that for each image, boxes and classes are in correspondence — that is: image_id[i], groundtruth_boxes[i, :] and groundtruth_classes[i] are associated with the same groundtruth annotation.

In the exported result, “area” fields are always set to the area of the groundtruth bounding box and “iscrowd” fields are always set to 0. TODO(jonathanhuang): pass in “iscrowd” array for evaluating on COCO dataset.

Parameters
  • image_ids – a list of unique image identifier either of type integer or string.

  • groundtruth_boxes – list of numpy arrays with shape [num_gt_boxes, 4] (note that num_gt_boxes can be different for each entry in the list)

  • groundtruth_classes – list of numpy arrays (int) with shape [num_gt_boxes] (note that num_gt_boxes can be different for each entry in the list)

  • categories

    a list of dictionaries representing all possible categories. Each dict in this list has the following keys:

    ‘id’: (required) an integer id uniquely identifying this category

    ‘name’: (required) string representing the category name, e.g., ‘cat’, ‘dog’, ‘pizza’

    ‘supercategory’: (optional) string representing the supercategory, e.g., ‘animal’, ‘vehicle’, ‘food’, etc.

  • output_path – (optional) path for exporting result to JSON

Returns

dictionary that can be read by COCO API

Raises

ValueError – if (1) groundtruth_boxes and groundtruth_classes do not have the right lengths or (2) if each of the elements inside these lists do not have the correct shapes or (3) if image_ids are not integers

easycv.core.evaluation.coco_tools.ExportSingleImageDetectionBoxesToCoco(image_id, category_id_set, detection_boxes, detection_scores, detection_classes)[source]

Export detections of a single image to COCO format.

This function converts detections represented as numpy arrays to dictionaries that can be ingested by the COCO evaluation API. Note that the image_ids provided here must match the ones given to ExportSingleImageGroundtruthToCoco. We assume that boxes and classes are in correspondence - that is: boxes[i, :] and classes[i] are associated with the same detection.

Parameters
  • image_id – unique image identifier either of type integer or string.

  • category_id_set – A set of valid class ids. Detections with classes not in category_id_set are dropped.

  • detection_boxes – float numpy array of shape [num_detections, 4] containing detection boxes.

  • detection_scores – float numpy array of shape [num_detections] containing scores for the detection boxes.

  • detection_classes – integer numpy array of shape [num_detections] containing the classes for detection boxes.

Returns

a list of detection annotations for a single image in the COCO format.

Raises

ValueError – if (1) detection_boxes, detection_scores and detection_classes do not have the right lengths or (2) if each of the elements inside these lists do not have the correct shapes or (3) if image_ids are not integers.

easycv.core.evaluation.coco_tools.ExportSingleImageDetectionMasksToCoco(image_id, category_id_set, detection_masks, detection_scores, detection_classes)[source]

Export detection masks of a single image to COCO format.

This function converts detections represented as numpy arrays to dictionaries that can be ingested by the COCO evaluation API. We assume that detection_masks, detection_scores, and detection_classes are in correspondence - that is: detection_masks[i, :], detection_scores[i] and detection_classes[i] are associated with the same annotation.

Parameters
  • image_id – unique image identifier either of type integer or string.

  • category_id_set – A set of valid class ids. Detections with classes not in category_id_set are dropped.

  • detection_masks – uint8 numpy array of shape [num_detections, image_height, image_width] containing detection_masks.

  • detection_scores – float numpy array of shape [num_detections] containing scores for detection masks.

  • detection_classes – integer numpy array of shape [num_detections] containing the classes for detection masks.

Returns

a list of detection mask annotations for a single image in the COCO format.

Raises

ValueError – if (1) detection_masks, detection_scores and detection_classes do not have the right lengths or (2) if each of the elements inside these lists do not have the correct shapes or (3) if image_ids are not integers.

easycv.core.evaluation.coco_tools.ExportDetectionsToCOCO(image_ids, detection_boxes, detection_scores, detection_classes, categories, output_path=None)[source]

Export detection annotations in numpy arrays to COCO API.

This function converts a set of predicted detections represented as numpy arrays to dictionaries that can be ingested by the COCO API. Inputs to this function are lists, consisting of boxes, scores and classes, respectively, corresponding to each image for which detections have been produced. Note that the image_ids provided here must match the ones given to the ExportGroundtruthToCOCO function in order for evaluation to work properly.

We assume that for each image, boxes, scores and classes are in correspondence — that is: detection_boxes[i, :], detection_scores[i] and detection_classes[i] are associated with the same detection.

Parameters
  • image_ids – a list of unique image identifier either of type integer or string.

  • detection_boxes – list of numpy arrays with shape [num_detection_boxes, 4]

  • detection_scores – list of numpy arrays (float) with shape [num_detection_boxes]. Note that num_detection_boxes can be different for each entry in the list.

  • detection_classes – list of numpy arrays (int) with shape [num_detection_boxes]. Note that num_detection_boxes can be different for each entry in the list.

  • categories – a list of dictionaries representing all possible categories. Each dict in this list must have an integer ‘id’ key uniquely identifying this category.

  • output_path – (optional) path for exporting result to JSON

Returns

list of dictionaries that can be read by COCO API, where each entry corresponds to a single detection and has keys from: [‘image_id’, ‘category_id’, ‘bbox’, ‘score’].

Raises

ValueError – if (1) detection_boxes and detection_classes do not have the right lengths or (2) if each of the elements inside these lists do not have the correct shapes or (3) if image_ids are not integers.

easycv.core.evaluation.coco_tools.ExportSegmentsToCOCO(image_ids, detection_masks, detection_scores, detection_classes, categories, output_path=None)[source]

Export segmentation masks in numpy arrays to COCO API.

This function converts a set of predicted instance masks represented as numpy arrays to dictionaries that can be ingested by the COCO API. Inputs to this function are lists, consisting of segments, scores and classes, respectively, corresponding to each image for which detections have been produced.

Note that this function is recommended for small datasets. For large datasets, it should be used with a merge function (e.g. in map-reduce); otherwise memory consumption is high.

We assume that for each image, masks, scores and classes are in correspondence — that is: detection_masks[i, :, :, :], detection_scores[i] and detection_classes[i] are associated with the same detection.

Parameters
  • image_ids – list of image ids (typically ints or strings)

  • detection_masks – list of numpy arrays with shape [num_detection, h, w, 1] and type uint8. The height and width should match the shape of corresponding image.

  • detection_scores – list of numpy arrays (float) with shape [num_detection]. Note that num_detection can be different for each entry in the list.

  • detection_classes – list of numpy arrays (int) with shape [num_detection]. Note that num_detection can be different for each entry in the list.

  • categories – a list of dictionaries representing all possible categories. Each dict in this list must have an integer ‘id’ key uniquely identifying this category.

  • output_path – (optional) path for exporting result to JSON

Returns

list of dictionaries that can be read by COCO API, where each entry corresponds to a single detection and has keys from: [‘image_id’, ‘category_id’, ‘segmentation’, ‘score’].

Raises

ValueError – if detection_masks and detection_classes do not have the right lengths or if each of the elements inside these lists do not have the correct shapes.

easycv.core.evaluation.coco_tools.ExportKeypointsToCOCO(image_ids, detection_keypoints, detection_scores, detection_classes, categories, output_path=None)[source]

Exports keypoints in numpy arrays to COCO API.

This function converts a set of predicted keypoints represented as numpy arrays to dictionaries that can be ingested by the COCO API. Inputs to this function are lists, consisting of keypoints, scores and classes, respectively, corresponding to each image for which detections have been produced.

We assume that for each image, keypoints, scores and classes are in correspondence — that is: detection_keypoints[i, :, :, :], detection_scores[i] and detection_classes[i] are associated with the same detection.

Parameters
  • image_ids – list of image ids (typically ints or strings)

  • detection_keypoints – list of numpy arrays with shape [num_detection, num_keypoints, 2] and type float32 in absolute x-y coordinates.

  • detection_scores – list of numpy arrays (float) with shape [num_detection]. Note that num_detection can be different for each entry in the list.

  • detection_classes – list of numpy arrays (int) with shape [num_detection]. Note that num_detection can be different for each entry in the list.

  • categories – a list of dictionaries representing all possible categories. Each dict in this list must have an integer ‘id’ key uniquely identifying this category and an integer ‘num_keypoints’ key specifying the number of keypoints the category has.

  • output_path – (optional) path for exporting result to JSON

Returns

list of dictionaries that can be read by COCO API, where each entry corresponds to a single detection and has keys from: [‘image_id’, ‘category_id’, ‘keypoints’, ‘score’].

Raises

ValueError – if detection_keypoints and detection_classes do not have the right lengths or if each of the elements inside these lists do not have the correct shapes.

easycv.core.evaluation.faceid_pair_eval module

easycv.core.evaluation.faceid_pair_eval.calculate_roc(thresholds, embeddings1, embeddings2, actual_issame, nrof_folds=10, pca=0)[source]
easycv.core.evaluation.faceid_pair_eval.calculate_accuracy(threshold, dist, actual_issame)[source]
easycv.core.evaluation.faceid_pair_eval.calculate_val(thresholds, embeddings1, embeddings2, actual_issame, far_target, nrof_folds=10)[source]
easycv.core.evaluation.faceid_pair_eval.calculate_val_far(threshold, dist, actual_issame)[source]
easycv.core.evaluation.faceid_pair_eval.faceid_evaluate(embeddings, actual_issame, nrof_folds=10, pca=0)[source]

Run a K-fold (K = nrof_folds) faceid pair-match test on embeddings.

Parameters
  • embeddings – [N x C] input embeddings of the whole dataset

  • actual_issame – [N/2, 1] labels indicating whether each pair matches

  • nrof_folds – number of folds for KFold

  • pca – if > 0, apply PCA and transform embeddings to [N, pca] features

Returns

KFold average best accuracy and best threshold
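
A hedged sketch of calling the pair-match evaluation on random embeddings; shapes follow the parameter notes above, and the two-value unpacking follows the Returns note:

    import numpy as np
    from easycv.core.evaluation.faceid_pair_eval import faceid_evaluate

    embeddings = np.random.randn(200, 128).astype(np.float32)  # N x C; pairs are (2i, 2i+1)
    actual_issame = np.random.rand(100) > 0.5                  # N/2 pair labels
    acc, best_threshold = faceid_evaluate(embeddings, actual_issame, nrof_folds=10)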

class easycv.core.evaluation.faceid_pair_eval.FaceIDPairEvaluator(dataset_name=None, metric_names=['acc'], kfold=10, pca=0)[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

FaceID pair evaluator. Takes N x 2 embedding pairs with labels, performs a K-fold threshold search, and returns the average best accuracy.

__init__(dataset_name=None, metric_names=['acc'], kfold=10, pca=0)[source]

Faceid small-dataset evaluator that performs pair-match validation.

Parameters
  • dataset_name – faceid small validation set name, one of [lfw, agedb_30, cfp_ff, cfp_fw, calfw]

  • kfold – number of folds for the train/val split

  • pca – PCA dimensions; if > 0, apply PCA to the input features, transforming them to [n, pca]

Returns

None

easycv.core.evaluation.metric_registry module

class easycv.core.evaluation.metric_registry.MetricRegistry[source]

Bases: object

__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

get(evaluator_type)[source]
register_default_best_metric(cls, metric_name, metric_cmp_op='max')[source]

Register default best metric for each evaluator

Parameters
  • cls (object) – class object

  • metric_name (str or List[str]) – default best metric name

  • metric_cmp_op (str or List[str]) – metric compare operation, should be one of [“max”, “min”]
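
A sketch of registering a default best metric for a custom evaluator; the module-level METRICS registry instance is an assumption here:

    from easycv.core.evaluation.base_evaluator import Evaluator
    from easycv.core.evaluation.metric_registry import METRICS  # assumed instance name

    class MyEvaluator(Evaluator):
        pass

    # Checkpoint selection will treat a larger 'my_metric' value as better.
    METRICS.register_default_best_metric(MyEvaluator, 'my_metric', 'max')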

easycv.core.evaluation.mse_eval module

class easycv.core.evaluation.mse_eval.MSEEvaluator(dataset_name=None, metric_names=['avg_mse'], neck_num=None)[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

MSE evaluator.

__init__(dataset_name=None, metric_names=['avg_mse'], neck_num=None)[source]

easycv.core.evaluation.retrival_topk_eval module

class easycv.core.evaluation.retrival_topk_eval.RetrivalTopKEvaluator(topk=(1, 2, 4, 8), norm=0, metric='cos', pca=0, dataset_name=None, metric_names=['R@K=1'], save_results=False, save_results_dir='', feature_keyword=['neck'])[source]

Bases: easycv.core.evaluation.base_evaluator.Evaluator

RetrivalTopK evaluator. Performs top-K retrieval by measuring the distance between each query and all other samples, taking the K nearest neighbors, and checking whether their IDs match (hit = 1, miss = 0). The final metric averages the retrieval rate over all queries.

__init__(topk=(1, 2, 4, 8), norm=0, metric='cos', pca=0, dataset_name=None, metric_names=['R@K=1'], save_results=False, save_results_dir='', feature_keyword=['neck'])[source]
Parameters

topk – tuple of int, evaluate top-k retrieval accuracy

easycv.core.evaluation.top_down_eval module

easycv.core.evaluation.top_down_eval.pose_pck_accuracy(output, target, mask, thr=0.05, normalize=None)[source]

Calculate the pose accuracy of PCK for each individual keypoint and the averaged accuracy across all keypoints from heatmaps.

Note

PCK metric measures accuracy of the localization of the body joints. The distances between predicted positions and the ground-truth ones are typically normalized by the bounding box size. The threshold (thr) of the normalized distance is commonly set as 0.05, 0.1 or 0.2 etc.

  • batch_size: N

  • num_keypoints: K

  • heatmap height: H

  • heatmap width: W

Parameters
  • output (np.ndarray[N, K, H, W]) – Model output heatmaps.

  • target (np.ndarray[N, K, H, W]) – Groundtruth heatmaps.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • thr (float) – Threshold of PCK calculation. Default 0.05.

  • normalize (np.ndarray[N, 2]) – Normalization factor for H&W.

Returns

A tuple containing keypoint accuracy.

  • np.ndarray[K]: Accuracy of each keypoint.

  • float: Averaged accuracy across all keypoints.

  • int: Number of valid keypoints.

Return type

tuple
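
A minimal call with random heatmaps (toy shapes and values; the shapes follow the parameter docs above):

    import numpy as np
    from easycv.core.evaluation.top_down_eval import pose_pck_accuracy

    output = np.random.rand(2, 17, 64, 48).astype(np.float32)  # [N, K, H, W]
    target = np.random.rand(2, 17, 64, 48).astype(np.float32)
    mask = np.ones((2, 17), dtype=bool)                        # all joints visible
    acc, avg_acc, cnt = pose_pck_accuracy(output, target, mask, thr=0.05)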

easycv.core.evaluation.top_down_eval.keypoint_pck_accuracy(pred, gt, mask, thr, normalize)[source]

Calculate the pose accuracy of PCK for each individual keypoint and the averaged accuracy across all keypoints for coordinates.

Note

PCK metric measures accuracy of the localization of the body joints. The distances between predicted positions and the ground-truth ones are typically normalized by the bounding box size. The threshold (thr) of the normalized distance is commonly set as 0.05, 0.1 or 0.2 etc.

  • batch_size: N

  • num_keypoints: K

Parameters
  • pred (np.ndarray[N, K, 2]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, 2]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • thr (float) – Threshold of PCK calculation.

  • normalize (np.ndarray[N, 2]) – Normalization factor for H&W.

Returns

A tuple containing keypoint accuracy.

  • acc (np.ndarray[K]): Accuracy of each keypoint.

  • avg_acc (float): Averaged accuracy across all keypoints.

  • cnt (int): Number of valid keypoints.

Return type

tuple
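
For intuition, the core of the PCK computation is essentially the following NumPy sketch (a simplified re-derivation returning only the averaged accuracy, not the exact library code):

    import numpy as np

    def pck_sketch(pred, gt, mask, thr, normalize):
        # Normalize x/y errors per sample, then take Euclidean distance per joint.
        dist = np.linalg.norm((pred - gt) / normalize[:, None, :], axis=-1)  # [N, K]
        hit = (dist < thr) & mask                   # joints both visible and within thr
        return hit.sum() / max(int(mask.sum()), 1)  # averaged accuracy over visible joints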

easycv.core.evaluation.top_down_eval.keypoint_auc(pred, gt, mask, normalize, num_step=20)[source]

Calculate the Area Under the Curve (AUC) of PCK accuracy over a range of normalized-distance thresholds for coordinates.

Note

  • batch_size: N

  • num_keypoints: K

Parameters
  • pred (np.ndarray[N, K, 2]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, 2]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • normalize (float) – Normalization factor.

  • num_step (int) – Number of normalized-distance thresholds sampled when computing the AUC. Default: 20.

Returns

Area under curve.

Return type

float
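
Conventionally, this AUC is the mean PCK accuracy over num_step thresholds sampled in [0, 1); a self-contained sketch under that assumption:

    import numpy as np

    def keypoint_auc_sketch(pred, gt, mask, normalize, num_step=20):
        # Broadcast the scalar normalization factor to the [N, 2] layout of PCK.
        norm = np.full((pred.shape[0], 2), normalize, dtype=np.float32)
        dist = np.linalg.norm((pred - gt) / norm[:, None, :], axis=-1)  # [N, K]
        accs = [((dist < thr) & mask).sum() / max(int(mask.sum()), 1)
                for thr in np.arange(0, 1, 1.0 / num_step)]
        return float(np.mean(accs))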

easycv.core.evaluation.top_down_eval.keypoint_nme(pred, gt, mask, normalize_factor)[source]

Calculate the normalized mean error (NME).

Note

  • batch_size: N

  • num_keypoints: K

Parameters
  • pred (np.ndarray[N, K, 2]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, 2]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • normalize_factor (np.ndarray[N, 2]) – Normalization factor.

Returns

normalized mean error

Return type

float

easycv.core.evaluation.top_down_eval.keypoint_epe(pred, gt, mask)[source]

Calculate the end-point error.

Note

  • batch_size: N

  • num_keypoints: K

Parameters
  • pred (np.ndarray[N, K, 2]) – Predicted keypoint location.

  • gt (np.ndarray[N, K, 2]) – Groundtruth keypoint location.

  • mask (np.ndarray[N, K]) – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

Returns

Average end-point error.

Return type

float
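
End-point error is simply the mean Euclidean distance over visible joints; a NumPy equivalent for reference:

    import numpy as np

    def epe_sketch(pred, gt, mask):
        dist = np.linalg.norm(pred - gt, axis=-1)  # [N, K] pixel distances
        return float(dist[mask].mean())            # average over visible joints only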

easycv.core.evaluation.top_down_eval.post_dark_udp(coords, batch_heatmaps, kernel=3)[source]

DARK post-processing. Implemented by UDP. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020). Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020).

Note

  • batch size: B

  • num keypoints: K

  • num persons: N

  • height of heatmaps: H

  • width of heatmaps: W

B=1 for the bottom_up paradigm, where all persons share the same heatmaps. B=N for the top_down paradigm, where each person has their own heatmaps.

Parameters
  • coords (np.ndarray[N, K, 2]) – Initial coordinates of human pose.

  • batch_heatmaps (np.ndarray[B, K, H, W]) – batch_heatmaps

  • kernel (int) – Gaussian kernel size (K) for modulation.

Returns

Refined coordinates.

Return type

np.ndarray[N, K, 2]
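
A minimal refinement call with toy inputs (shapes follow the parameter docs; values are illustrative):

    import numpy as np
    from easycv.core.evaluation.top_down_eval import post_dark_udp

    coords = np.random.rand(2, 17, 2) * [40, 56]                 # initial (x, y), [N, K, 2]
    heatmaps = np.random.rand(2, 17, 64, 48).astype(np.float32)  # [B, K, H, W]
    refined = post_dark_udp(coords, heatmaps, kernel=3)          # refined [N, K, 2]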

easycv.core.evaluation.top_down_eval.keypoints_from_heatmaps(heatmaps, center, scale, unbiased=False, post_process='default', kernel=11, valid_radius_factor=0.0546875, use_udp=False, target_type='GaussianHeatmap')[source]

Get final keypoint predictions from heatmaps and transform them back to the image.

Note

  • batch size: N

  • num keypoints: K

  • heatmap height: H

  • heatmap width: W

Parameters
  • heatmaps (np.ndarray[N, K, H, W], dtype=float32) – model predicted heatmaps.

  • center (np.ndarray[N, 2]) – Center of the bounding box (x, y).

  • scale (np.ndarray[N, 2]) – Scale of the bounding box wrt height/width.

  • post_process (str/None) – Choice of methods to post-process heatmaps. Currently supported: None, ‘default’, ‘unbiased’, ‘megvii’.

  • unbiased (bool) – Option to use unbiased decoding. Mutually exclusive with megvii. Note: this arg is deprecated and unbiased=True can be replaced by post_process=’unbiased’ Paper ref: Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020).

  • kernel (int) – Gaussian kernel size (K) for modulation, which should match the heatmap gaussian sigma when training. K=17 for sigma=3 and K=11 for sigma=2.

  • valid_radius_factor (float) – The radius factor of the positive area in classification heatmap for UDP.

  • use_udp (bool) – Use unbiased data processing.

  • target_type (str) – ‘GaussianHeatmap’ or ‘CombinedTarget’. GaussianHeatmap: Classification target with gaussian distribution. CombinedTarget: The combination of classification target (response map) and regression target (offset map). Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

Returns

A tuple containing keypoint predictions and scores.

  • preds (np.ndarray[N, K, 2]): Predicted keypoint location in images.

  • maxvals (np.ndarray[N, K, 1]): Scores (confidence) of the keypoints.

Return type

tuple
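
A minimal end-to-end decode call (toy shapes; center and scale follow the documented [N, 2] layout):

    import numpy as np
    from easycv.core.evaluation.top_down_eval import keypoints_from_heatmaps

    heatmaps = np.random.rand(1, 17, 64, 48).astype(np.float32)  # [N, K, H, W]
    center = np.array([[96., 128.]], dtype=np.float32)  # bbox centers (x, y)
    scale = np.array([[1.0, 1.5]], dtype=np.float32)    # bbox scales wrt height/width
    preds, maxvals = keypoints_from_heatmaps(heatmaps, center, scale)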