easycv.datasets.pose.pipelines package

class easycv.datasets.pose.pipelines.PoseCollect(keys, meta_keys, meta_name='img_metas')[source]

Bases: object

Collect data from the loader relevant to the specific task.

This keeps the items in keys as they are, and collects the items in meta_keys into a meta item called meta_name. This is usually the last stage of the data loader pipeline. For example, when keys=’imgs’, meta_keys=(‘filename’, ‘label’, ‘original_shape’) and meta_name=’img_metas’, the results will be a dict with keys ‘imgs’ and ‘img_metas’, where ‘img_metas’ is a DataContainer of another dict with keys ‘filename’, ‘label’ and ‘original_shape’.

Parameters
  • keys (Sequence[str|tuple]) – Required keys to be collected. If a tuple (key, key_new) is given as an element, the item retrieved by key will be renamed as key_new in collected data.

  • meta_name (str) – The name of the key that contains meta information. This key is always populated. Default: “img_metas”.

  • meta_keys (Sequence[str|tuple]) – Keys that are collected under meta_name. The contents of the meta_name dictionary depend on meta_keys.

__init__(keys, meta_keys, meta_name='img_metas')[source]

Initialize self. See help(type(self)) for accurate signature.
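
The following is a minimal usage sketch, assuming the standard pipeline interface where a transform instance is called with the results dict; the key names in the example dict are illustrative:

    import numpy as np
    from easycv.datasets.pose.pipelines import PoseCollect

    # Keep 'img' under its own key and bundle the rest into 'img_metas'.
    collect = PoseCollect(
        keys=['img'],
        meta_keys=['image_file', 'center', 'scale', 'rotation'])

    results = {
        'img': np.zeros((256, 192, 3), dtype=np.float32),
        'image_file': '000001.jpg',
        'center': np.array([96.0, 128.0]),
        'scale': np.array([1.0, 1.0]),
        'rotation': 0,
    }

    out = collect(results)
    # out keeps 'img' as-is; out['img_metas'] is a DataContainer wrapping
    # the collected meta items ('image_file', 'center', 'scale', 'rotation').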

class easycv.datasets.pose.pipelines.TopDownRandomFlip(flip_prob=0.5)[source]

Bases: object

Data augmentation with random image flip.

Required keys: ‘img’, ‘joints_3d’, ‘joints_3d_visible’, ‘center’ and ‘ann_info’. Modifies keys: ‘img’, ‘joints_3d’, ‘joints_3d_visible’, ‘center’ and ‘flipped’.

Parameters
  • flip (bool) – Option to perform random flip.

  • flip_prob (float) – Probability of flip.

__init__(flip_prob=0.5)[source]

Initialize self. See help(type(self)) for accurate signature.
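
For illustration, a plain-NumPy sketch of the horizontal-flip logic described above (not the class itself); the flip_pairs indices and array shapes are assumptions:

    import numpy as np

    def flip_pose(img, joints_3d, joints_3d_visible, center, flip_pairs):
        """Mirror the image, keypoints and center horizontally (sketch)."""
        width = img.shape[1]
        img = img[:, ::-1, :]                              # flip the image
        joints_3d[:, 0] = width - 1 - joints_3d[:, 0]      # mirror x coordinates
        center[0] = width - 1 - center[0]
        for left, right in flip_pairs:                     # swap left/right joints
            joints_3d[[left, right]] = joints_3d[[right, left]]
            joints_3d_visible[[left, right]] = joints_3d_visible[[right, left]]
        return img, joints_3d, joints_3d_visible, center

    img = np.zeros((64, 48, 3), dtype=np.float32)
    joints = np.zeros((17, 3), dtype=np.float32)
    visible = np.ones((17, 3), dtype=np.float32)
    flip_pairs = [(1, 2), (3, 4)]  # illustrative left/right index pairs
    flip_pose(img, joints, visible, np.array([24.0, 32.0]), flip_pairs)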

class easycv.datasets.pose.pipelines.TopDownHalfBodyTransform(num_joints_half_body=8, prob_half_body=0.3)[source]

Bases: object

Data augmentation with half-body transform. Keep only the upper body or the lower body at random.

Required keys: ‘joints_3d’, ‘joints_3d_visible’ and ‘ann_info’. Modifies keys: ‘scale’ and ‘center’.

Parameters
  • num_joints_half_body (int) – Threshold for performing the half-body transform. If the body has fewer than num_joints_half_body joints, this step is skipped.

  • prob_half_body (float) – Probability of half-body transform.

__init__(num_joints_half_body=8, prob_half_body=0.3)[source]

Initialize self. See help(type(self)) for accurate signature.

static half_body_transform(cfg, joints_3d, joints_3d_visible)[source]

Get the center and scale for the half-body transform.
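
As a rough sketch of the idea, the new center and scale can be derived from only the kept joints; the COCO-style upper-body split (indices 0-10) and the padding factor below are assumptions, not the exact values used by half_body_transform:

    import numpy as np

    joints_3d = np.random.rand(17, 3) * 200        # illustrative keypoints
    upper_ids = list(range(11))                    # assumed upper-body indices
    selected = joints_3d[upper_ids, :2]

    center = selected.mean(axis=0)                         # midpoint of kept joints
    w, h = selected.max(axis=0) - selected.min(axis=0)     # tight box around them
    pixel_std, padding = 200.0, 1.5                        # padding is an assumption
    scale = np.array([w, h], dtype=np.float32) / pixel_std * padding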

class easycv.datasets.pose.pipelines.TopDownGetRandomScaleRotation(rot_factor=40, scale_factor=0.5, rot_prob=0.6)[source]

Bases: object

Data augmentation with random scaling & rotating.

Required key: ‘scale’. Modifies keys: ‘scale’ and ‘rotation’.

Parameters
  • rot_factor (int) – Rotation is sampled from [-2*rot_factor, 2*rot_factor].

  • scale_factor (float) – Scale is sampled from [1-scale_factor, 1+scale_factor].

  • rot_prob (float) – Probability of random rotation.

__init__(rot_factor=40, scale_factor=0.5, rot_prob=0.6)[source]

Initialize self. See help(type(self)) for accurate signature.
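
A sketch of sampling consistent with the documented ranges; whether the underlying distribution is Gaussian or uniform is an implementation detail not stated here:

    import numpy as np

    rot_factor, scale_factor, rot_prob = 40, 0.5, 0.6

    # Scale factor clipped to [1 - scale_factor, 1 + scale_factor].
    s = np.clip(np.random.randn() * scale_factor + 1,
                1 - scale_factor, 1 + scale_factor)

    # Rotation clipped to [-2 * rot_factor, 2 * rot_factor], applied with rot_prob.
    if np.random.rand() <= rot_prob:
        r = np.clip(np.random.randn() * rot_factor, -2 * rot_factor, 2 * rot_factor)
    else:
        r = 0.0

    # results['scale'] would be multiplied by s and results['rotation'] set to r.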

class easycv.datasets.pose.pipelines.TopDownAffine(use_udp=False)[source]

Bases: object

Affine-transform the image to produce the model input.

Required keys: ‘img’, ‘joints_3d’, ‘joints_3d_visible’, ‘ann_info’, ‘scale’, ‘rotation’ and ‘center’. Modified keys: ‘img’, ‘joints_3d’ and ‘joints_3d_visible’.

Parameters

use_udp (bool) – Whether to use unbiased data processing (UDP). Paper ref: Huang et al., The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

__init__(use_udp=False)[source]

Initialize self. See help(type(self)) for accurate signature.
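
For intuition only, a simplified OpenCV sketch of cropping by center/scale/rotation; it ignores the height/aspect handling and does not transform the keypoints, so it is not the exact transform implemented by TopDownAffine:

    import cv2
    import numpy as np

    def simple_affine_crop(img, center, scale, rotation, out_size, pixel_std=200.0):
        """Rotate about the bbox center, rescale and shift into the output frame."""
        out_w, out_h = out_size
        src_w = scale[0] * pixel_std          # bbox width in pixels
        mat = cv2.getRotationMatrix2D((float(center[0]), float(center[1])),
                                      rotation, out_w / src_w)
        mat[0, 2] += out_w / 2 - center[0]    # move bbox center to output center
        mat[1, 2] += out_h / 2 - center[1]
        return cv2.warpAffine(img, mat, (out_w, out_h), flags=cv2.INTER_LINEAR)

    img = np.zeros((480, 640, 3), dtype=np.uint8)
    crop = simple_affine_crop(img, np.array([320.0, 240.0]),
                              np.array([1.0, 1.33]), rotation=15,
                              out_size=(192, 256))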

class easycv.datasets.pose.pipelines.TopDownGenerateTarget(sigma=2, kernel=(11, 11), valid_radius_factor=0.0546875, target_type='GaussianHeatmap', encoding='MSRA', unbiased_encoding=False)[source]

Bases: object

Generate the target heatmap.

Required keys: ‘joints_3d’, ‘joints_3d_visible’, ‘ann_info’. Modified keys: ‘target’, and ‘target_weight’.

Parameters
  • sigma – Sigma of the heatmap Gaussian for the ‘MSRA’ approach.

  • kernel – Kernel of the heatmap Gaussian for the ‘Megvii’ approach.

  • encoding (str) – Approach used to generate target heatmaps. Currently supported approaches: ‘MSRA’, ‘Megvii’, ‘UDP’. Default: ‘MSRA’.

  • unbiased_encoding (bool) – Option to use unbiased encoding methods. Paper ref: Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020).

  • keypoint_pose_distance – Keypoint pose distance for UDP. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

  • target_type (str) – Supported targets: ‘GaussianHeatmap’, ‘CombinedTarget’. Default: ‘GaussianHeatmap’. ‘CombinedTarget’ is the combination of a classification target (response map) and a regression target (offset map). Paper ref: Huang et al., The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

__init__(sigma=2, kernel=(11, 11), valid_radius_factor=0.0546875, target_type='GaussianHeatmap', encoding='MSRA', unbiased_encoding=False)[source]

Initialize self. See help(type(self)) for accurate signature.
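
As an illustration of the ‘MSRA’-style Gaussian target for a single keypoint (the keypoint is assumed to be already mapped into heatmap coordinates; the target_weight handling is omitted):

    import numpy as np

    def gaussian_heatmap(joint_xy, heatmap_size, sigma=2):
        """One channel of the target: a 2D Gaussian centred on the keypoint."""
        W, H = heatmap_size
        xs = np.arange(W, dtype=np.float32)
        ys = np.arange(H, dtype=np.float32)[:, None]
        x0, y0 = joint_xy
        return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))

    # e.g. a 48x64 heatmap for a 192x256 input with stride 4.
    target = gaussian_heatmap((20.5, 33.0), heatmap_size=(48, 64), sigma=2)
    print(target.shape)   # (64, 48): one of the K channels of the target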

class easycv.datasets.pose.pipelines.TopDownGenerateTargetRegression[source]

Bases: object

Generate the target regression vector (coordinates).

Required keys: ‘joints_3d’, ‘joints_3d_visible’, ‘ann_info’. Modified keys: ‘target’, and ‘target_weight’.

__init__()[source]

Initialize self. See help(type(self)) for accurate signature.
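
A minimal sketch, assuming the regression target is the keypoint coordinates normalized by the input image size, with weights taken from joint visibility:

    import numpy as np

    joints_3d = np.array([[48.0, 64.0, 0.0], [96.0, 128.0, 0.0]])
    joints_3d_visible = np.ones((2, 3))
    image_size = np.array([192, 256])          # from ann_info (assumed)

    target = joints_3d[:, :2] / image_size     # (K, 2), coordinates in [0, 1]
    target_weight = joints_3d_visible[:, :2]   # 1 where the joint is labelled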

class easycv.datasets.pose.pipelines.TopDownRandomTranslation(trans_factor=0.15, trans_prob=1.0)[source]

Bases: object

Data augmentation with random translation.

Required keys: ‘scale’ and ‘center’. Modifies key: ‘center’.

Notes

H denotes the bbox height and W the bbox width.

Parameters
  • trans_factor (float) – The center is translated by an offset sampled from [-trans_factor, trans_factor] * [W, H].

  • trans_prob (float) – Probability of random translation.

__init__(trans_factor=0.15, trans_prob=1.0)[source]

Initialize self. See help(type(self)) for accurate signature.
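
A sketch consistent with the documented range; the actual offset distribution inside the range is an implementation detail:

    import numpy as np

    trans_factor, trans_prob, pixel_std = 0.15, 1.0, 200.0
    center = np.array([320.0, 240.0])
    scale = np.array([1.0, 1.33])              # (W, H) / pixel_std convention

    if np.random.rand() <= trans_prob:
        wh = scale * pixel_std                 # bbox width/height in pixels
        center = center + trans_factor * np.random.uniform(-1, 1, size=2) * wh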

class easycv.datasets.pose.pipelines.TopDownRandomShiftBboxCenter(shift_factor: float = 0.16, prob: float = 0.3)[source]

Bases: object

Random shift the bbox center.

Required keys: ‘center’, ‘scale’

Modifies key: ‘center’

Parameters
  • shift_factor (float) – The factor controlling the shift range, which is scale*pixel_std*shift_factor. Default: 0.16

  • prob (float) – Probability of applying random shift. Default: 0.3

pixel_std: float = 200.0

__init__(shift_factor: float = 0.16, prob: float = 0.3)[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.datasets.pose.pipelines.TopDownGetBboxCenterScale(padding: float = 1.25)[source]

Bases: object

Convert bbox from [x, y, w, h] to center and scale.

The center is the coordinates of the bbox center, and the scale is the bbox width and height normalized by a scale factor.

Required keys: ‘bbox’, ‘ann_info’

Modifies key: ‘center’, ‘scale’

Parameters

padding (float) – bbox padding scale that will be multiplied with the scale. Default: 1.25

pixel_std: float = 200.0

__init__(padding: float = 1.25)[source]

Initialize self. See help(type(self)) for accurate signature.
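
A minimal sketch of the conversion, following the common center/scale convention (the aspect ratio would normally come from ann_info[‘image_size’]); the exact aspect-ratio handling may differ from the implementation:

    import numpy as np

    def xywh_to_center_scale(bbox, aspect_ratio, padding=1.25, pixel_std=200.0):
        """Illustrative bbox -> (center, scale) conversion."""
        x, y, w, h = bbox
        center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
        # Expand the box so its aspect ratio matches the model input (w / h).
        if w > aspect_ratio * h:
            h = w / aspect_ratio
        else:
            w = h * aspect_ratio
        scale = np.array([w, h], dtype=np.float32) / pixel_std * padding
        return center, scale

    # A 192x256 input gives aspect_ratio = 192 / 256 = 0.75.
    center, scale = xywh_to_center_scale([100, 50, 80, 120], aspect_ratio=0.75)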

Submodules

easycv.datasets.pose.pipelines.transforms module

class easycv.datasets.pose.pipelines.transforms.PoseCollect(keys, meta_keys, meta_name='img_metas')[source]

Bases: object

Collect data from the loader relevant to the specific task.

This keeps the items in keys as they are, and collects the items in meta_keys into a meta item called meta_name. This is usually the last stage of the data loader pipeline. For example, when keys=’imgs’, meta_keys=(‘filename’, ‘label’, ‘original_shape’) and meta_name=’img_metas’, the results will be a dict with keys ‘imgs’ and ‘img_metas’, where ‘img_metas’ is a DataContainer of another dict with keys ‘filename’, ‘label’ and ‘original_shape’.

Parameters
  • keys (Sequence[str|tuple]) – Required keys to be collected. If a tuple (key, key_new) is given as an element, the item retrieved by key will be renamed as key_new in collected data.

  • meta_name (str) – The name of the key that contains meta information. This key is always populated. Default: “img_metas”.

  • meta_keys (Sequence[str|tuple]) – Keys that are collected under meta_name. The contents of the meta_name dictionary depend on meta_keys.

__init__(keys, meta_keys, meta_name='img_metas')[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.datasets.pose.pipelines.transforms.TopDownRandomFlip(flip_prob=0.5)[source]

Bases: object

Data augmentation with random image flip.

Required keys: ‘img’, ‘joints_3d’, ‘joints_3d_visible’, ‘center’ and ‘ann_info’. Modifies keys: ‘img’, ‘joints_3d’, ‘joints_3d_visible’, ‘center’ and ‘flipped’.

Parameters
  • flip (bool) – Option to perform random flip.

  • flip_prob (float) – Probability of flip.

__init__(flip_prob=0.5)[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.datasets.pose.pipelines.transforms.TopDownHalfBodyTransform(num_joints_half_body=8, prob_half_body=0.3)[source]

Bases: object

Data augmentation with half-body transform. Keep only the upper body or the lower body at random.

Required keys: ‘joints_3d’, ‘joints_3d_visible’ and ‘ann_info’. Modifies keys: ‘scale’ and ‘center’.

Parameters
  • num_joints_half_body (int) – Threshold for performing the half-body transform. If the body has fewer than num_joints_half_body joints, this step is skipped.

  • prob_half_body (float) – Probability of half-body transform.

__init__(num_joints_half_body=8, prob_half_body=0.3)[source]

Initialize self. See help(type(self)) for accurate signature.

static half_body_transform(cfg, joints_3d, joints_3d_visible)[source]

Get the center and scale for the half-body transform.

class easycv.datasets.pose.pipelines.transforms.TopDownGetRandomScaleRotation(rot_factor=40, scale_factor=0.5, rot_prob=0.6)[source]

Bases: object

Data augmentation with random scaling & rotating.

Required key: ‘scale’. Modifies keys: ‘scale’ and ‘rotation’.

Parameters
  • rot_factor (int) – Rotation is sampled from [-2*rot_factor, 2*rot_factor].

  • scale_factor (float) – Scale is sampled from [1-scale_factor, 1+scale_factor].

  • rot_prob (float) – Probability of random rotation.

__init__(rot_factor=40, scale_factor=0.5, rot_prob=0.6)[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.datasets.pose.pipelines.transforms.TopDownAffine(use_udp=False)[source]

Bases: object

Affine-transform the image to produce the model input.

Required keys: ‘img’, ‘joints_3d’, ‘joints_3d_visible’, ‘ann_info’, ‘scale’, ‘rotation’ and ‘center’. Modified keys: ‘img’, ‘joints_3d’ and ‘joints_3d_visible’.

Parameters

use_udp (bool) – Whether to use unbiased data processing (UDP). Paper ref: Huang et al., The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

__init__(use_udp=False)[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.datasets.pose.pipelines.transforms.TopDownGenerateTarget(sigma=2, kernel=(11, 11), valid_radius_factor=0.0546875, target_type='GaussianHeatmap', encoding='MSRA', unbiased_encoding=False)[source]

Bases: object

Generate the target heatmap.

Required keys: ‘joints_3d’, ‘joints_3d_visible’, ‘ann_info’. Modified keys: ‘target’, and ‘target_weight’.

Parameters
  • sigma – Sigma of the heatmap Gaussian for the ‘MSRA’ approach.

  • kernel – Kernel of the heatmap Gaussian for the ‘Megvii’ approach.

  • encoding (str) – Approach used to generate target heatmaps. Currently supported approaches: ‘MSRA’, ‘Megvii’, ‘UDP’. Default: ‘MSRA’.

  • unbiased_encoding (bool) – Option to use unbiased encoding methods. Paper ref: Zhang et al. Distribution-Aware Coordinate Representation for Human Pose Estimation (CVPR 2020).

  • keypoint_pose_distance – Keypoint pose distance for UDP. Paper ref: Huang et al. The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

  • target_type (str) – Supported targets: ‘GaussianHeatmap’, ‘CombinedTarget’. Default: ‘GaussianHeatmap’. ‘CombinedTarget’ is the combination of a classification target (response map) and a regression target (offset map). Paper ref: Huang et al., The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation (CVPR 2020).

__init__(sigma=2, kernel=(11, 11), valid_radius_factor=0.0546875, target_type='GaussianHeatmap', encoding='MSRA', unbiased_encoding=False)[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.datasets.pose.pipelines.transforms.TopDownGenerateTargetRegression[source]

Bases: object

Generate the target regression vector (coordinates).

Required keys: ‘joints_3d’, ‘joints_3d_visible’, ‘ann_info’. Modified keys: ‘target’, and ‘target_weight’.

__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.datasets.pose.pipelines.transforms.TopDownRandomTranslation(trans_factor=0.15, trans_prob=1.0)[source]

Bases: object

Data augmentation with random translation.

Required keys: ‘scale’ and ‘center’. Modifies key: ‘center’.

Notes

H denotes the bbox height and W the bbox width.

Parameters
  • trans_factor (float) – The center is translated by an offset sampled from [-trans_factor, trans_factor] * [W, H].

  • trans_prob (float) – Probability of random translation.

__init__(trans_factor=0.15, trans_prob=1.0)[source]

Initialize self. See help(type(self)) for accurate signature.

easycv.datasets.pose.pipelines.transforms.bbox_xywh2cs(bbox, aspect_ratio, padding=1.0, pixel_std=200.0)[source]

Transform the bbox format from (x, y, w, h) into (center, scale).

Parameters
  • bbox (ndarray) – Single bbox in (x, y, w, h)

  • aspect_ratio (float) – The expected bbox aspect ratio (w over h)

  • padding (float) – Bbox padding factor that will be multiplied with the scale. Default: 1.0

  • pixel_std (float) – The scale normalization factor. Default: 200.0

Returns

A tuple containing center and scale.

  • np.ndarray[float32](2,): Center of the bbox (x, y).

  • np.ndarray[float32](2,): Scale of the bbox w & h.

Return type

tuple
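
A hedged usage example; the expected values in the comments assume the common convention where the box is first widened to match aspect_ratio before normalization, so the actual output may differ slightly:

    import numpy as np
    from easycv.datasets.pose.pipelines.transforms import bbox_xywh2cs

    bbox = np.array([100.0, 50.0, 80.0, 120.0])    # (x, y, w, h)
    center, scale = bbox_xywh2cs(bbox, aspect_ratio=0.75, padding=1.25)

    # Expected under that convention:
    #   center ~ [140., 110.]                     (bbox midpoint)
    #   scale  ~ [90, 120] / 200 * 1.25 = [0.5625, 0.75]
    #   (w widened from 80 to 90 so that w / h == aspect_ratio)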

easycv.datasets.pose.pipelines.transforms.bbox_cs2xyxy(center, scale, padding=1.0, pixel_std=200.0)[source]

class easycv.datasets.pose.pipelines.transforms.TopDownGetBboxCenterScale(padding: float = 1.25)[source]

Bases: object

Convert bbox from [x, y, w, h] to center and scale.

The center is the coordinates of the bbox center, and the scale is the bbox width and height normalized by a scale factor.

Required keys: ‘bbox’, ‘ann_info’

Modifies key: ‘center’, ‘scale’

Parameters

padding (float) – bbox padding scale that will be multiplied with the scale. Default: 1.25

pixel_std: float = 200.0

__init__(padding: float = 1.25)[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.datasets.pose.pipelines.transforms.TopDownRandomShiftBboxCenter(shift_factor: float = 0.16, prob: float = 0.3)[source]

Bases: object

Random shift the bbox center.

Required keys: ‘center’, ‘scale’

Modifies key: ‘center’

Parameters
  • shift_factor (float) – The factor controlling the shift range, which is scale*pixel_std*shift_factor. Default: 0.16

  • prob (float) – Probability of applying random shift. Default: 0.3

pixel_std: float = 200.0

__init__(shift_factor: float = 0.16, prob: float = 0.3)[source]

Initialize self. See help(type(self)) for accurate signature.