easycv.predictors package

Submodules

easycv.predictors.base module

class easycv.predictors.base.NumpyToPIL[source]

Bases: object

class easycv.predictors.base.Predictor(model_path, numpy_to_pil=True)[source]

Bases: object

__init__(model_path, numpy_to_pil=True)[source]

Initialize self. See help(type(self)) for accurate signature.

preprocess(image_list)[source]
predict_batch(image_batch, **forward_kwargs)[source]

Predict using batched data.

Parameters
  • image_batch (torch.Tensor) – tensor with shape [N, 3, H, W]

  • forward_kwargs – kwargs for additional parameters

Returns

the output of model.forward, list or tuple

Return type

output
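
A minimal usage sketch for this legacy Predictor API; the checkpoint path, the image file, and the exact input format accepted by preprocess (loaded numpy images here) are assumptions, not confirmed by this reference.

import cv2
import torch
from easycv.predictors.base import Predictor

predictor = Predictor('path/to/model.pth', numpy_to_pil=True)
images = [cv2.imread('demo.jpg')]             # list of numpy arrays (BGR)
batch = predictor.preprocess(images)          # assumed to yield a [N, 3, H, W] tensor
with torch.no_grad():
    outputs = predictor.predict_batch(batch)  # output of model.forward, list or tuple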

class easycv.predictors.base.InputProcessor(cfg, pipelines=None, batch_size=1, threads=8, mode='BGR')[source]

Bases: object

Base input processor for processing input samples.

Parameters
  • cfg (Config) – Config instance.

  • pipelines (list[dict]) – Data pipeline configs.

  • batch_size (int) – batch size for forward.

  • threads (int) – Number of processes to process inputs.

  • mode (str) – The image mode fed into the model.

__init__(cfg, pipelines=None, batch_size=1, threads=8, mode='BGR')[source]

Initialize self. See help(type(self)) for accurate signature.

build_processor()[source]

Build the processor that processes loaded inputs. Reimplement it if you need custom preprocessing ops.

process_single(input)[source]

Process a single input sample. Reimplement it if you need custom ops to load or process a single input sample.
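
A sketch of overriding process_single in a subclass; the decoding step is purely illustrative and not part of EasyCV.

from easycv.predictors.base import InputProcessor

class MyInputProcessor(InputProcessor):

    def process_single(self, input):
        # illustrative custom step: decode or normalize the raw input first,
        # then fall back to the default single-sample processing
        decoded = input  # replace with real decoding logic
        return super().process_single(decoded)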

class easycv.predictors.base.OutputProcessor[source]

Bases: object

Base output processor for processing model outputs.

__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

process_single(inputs)[source]

Process the outputs of a single sample. Reimplement it if you need to add extra processing ops.
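
Similarly, a sketch of a custom output processor; the extra ops in the comment are hypothetical, since the exact output schema depends on the model.

from easycv.predictors.base import OutputProcessor

class MyOutputProcessor(OutputProcessor):

    def process_single(self, inputs):
        outputs = super().process_single(inputs)
        # add extra ops here, e.g. rounding scores or renaming result keys
        return outputs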

class easycv.predictors.base.PredictorV2(model_path, config_file=None, batch_size=1, device=None, save_results=False, save_path=None, pipelines=None, input_processor_threads=8, mode='BGR')[source]

Bases: object

Base predict pipeline.

Parameters
  • model_path (str) – Path of the model file.

  • config_file (Optional[str]) – config file path for model and processor to init. Defaults to None.

  • batch_size (int) – batch size for forward.

  • device (str | torch.device) – Supports str (‘cuda’ or ‘cpu’) or torch.device; if None, the device is detected automatically.

  • save_results (bool) – Whether to save predict results.

  • save_path (str) – File path for saving results, only valid when save_results is True.

  • pipelines (list[dict]) – Data pipeline configs.

  • input_processor_threads (int) – Number of processes to process inputs.

  • mode (str) – The image mode fed into the model.

__init__(model_path, config_file=None, batch_size=1, device=None, save_results=False, save_path=None, pipelines=None, input_processor_threads=8, mode='BGR')[source]

Initialize self. See help(type(self)) for accurate signature.

get_input_processor()[source]
get_output_processor()[source]
prepare_model()[source]

Build the model from the config file by default. Reimplement it if the model is not loaded from a configuration file, e.g. a torch jit model.

model_forward(inputs)[source]

Model forward. Reimplement it if you need to refactor the model forward pass.

dump(obj, save_path, mode='wb')[source]
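
A minimal usage sketch, assuming PredictorV2 instances are invoked by calling them on a list of image paths, which is the convention its subclasses follow; all paths are placeholders.

from easycv.predictors.base import PredictorV2

predictor = PredictorV2(
    model_path='path/to/model.pth',   # placeholder checkpoint
    config_file='path/to/config.py',  # placeholder config
    batch_size=2,
)
results = predictor(['demo1.jpg', 'demo2.jpg'])  # assumed call convention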

easycv.predictors.builder module

easycv.predictors.builder.build_predictor(cfg, default_args=None)[source]
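
build_predictor follows the registry pattern of mmcv-style codebases: the cfg dict selects a registered predictor class via type, and the remaining keys become constructor arguments. A hedged sketch with placeholder paths:

from easycv.predictors.builder import build_predictor

cfg = dict(
    type='ClassificationPredictor',   # registered predictor class name
    model_path='path/to/model.pth',
    config_file='path/to/config.py',
)
predictor = build_predictor(cfg)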

easycv.predictors.classifier module

class easycv.predictors.classifier.ClsInputProcessor(cfg, pipelines=None, batch_size=1, pil_input=True, threads=8, mode='BGR')[source]

Bases: easycv.predictors.base.InputProcessor

Process inputs for classification models.

Parameters
  • cfg (Config) – Config instance.

  • pipelines (list[dict]) – Data pipeline configs.

  • batch_size (int) – batch size for forward.

  • pil_input (bool) – Whether to use PIL images. Set to True if the processor requires PIL input. Defaults to True.

  • threads (int) – Number of processes to process inputs.

  • mode (str) – The image mode fed into the model.

__init__(cfg, pipelines=None, batch_size=1, pil_input=True, threads=8, mode='BGR')[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.predictors.classifier.ClsOutputProcessor(topk=1, label_map={})[source]

Bases: easycv.predictors.base.OutputProcessor

Output processor for processing classification model outputs.

Parameters
  • topk (int) – Return top-k results. Default: 1.

  • label_map (dict) – Dict of class id to class name.

__init__(topk=1, label_map={})[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.predictors.classifier.ClassificationPredictor(model_path, config_file=None, batch_size=1, device=None, save_results=False, save_path=None, pipelines=None, topk=1, pil_input=True, label_map_path=None, input_processor_threads=8, mode='BGR', *args, **kwargs)[source]

Bases: easycv.predictors.base.PredictorV2

Predictor for classification.

Parameters
  • model_path (str) – Path of the model file.

  • config_file (Optional[str]) – config file path for model and processor to init. Defaults to None.

  • batch_size (int) – batch size for forward.

  • device (str) – Supports ‘cuda’ or ‘cpu’; if None, the device is detected automatically.

  • save_results (bool) – Whether to save predict results.

  • save_path (str) – File path for saving results, only valid when save_results is True.

  • pipelines (list[dict]) – Data pipeline configs.

  • topk (int) – Return top-k results. Default: 1.

  • pil_input (bool) – Whether to use PIL images. Set to True if the processor requires PIL input. Defaults to True.

  • label_map_path (str) – File path of the saved labels list.

  • input_processor_threads (int) – Number of processes to process inputs.

  • mode (str) – The image mode fed into the model.

__init__(model_path, config_file=None, batch_size=1, device=None, save_results=False, save_path=None, pipelines=None, topk=1, pil_input=True, label_map_path=None, input_processor_threads=8, mode='BGR', *args, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

get_input_processor()[source]
get_output_processor()[source]
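
A usage sketch; the checkpoint, config, and label-map paths are placeholders, and the call convention follows PredictorV2 as assumed above.

from easycv.predictors.classifier import ClassificationPredictor

predictor = ClassificationPredictor(
    'path/to/cls_model.pth',
    config_file='path/to/cls_config.py',
    topk=5,
    label_map_path='path/to/labels.txt',  # maps class ids to names
)
results = predictor(['demo.jpg'])  # expected to carry top-k classes and scores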

easycv.predictors.detector module

easycv.predictors.detector.onnx_to_numpy(tensor)[source]
class easycv.predictors.detector.DetInputProcessor(cfg, pipelines=None, batch_size=1, threads=8, mode='BGR')[source]

Bases: easycv.predictors.base.InputProcessor

build_processor()[source]

Build the processor that processes loaded inputs. Reimplement it if you need custom preprocessing ops.

class easycv.predictors.detector.DetOutputProcessor(score_thresh, classes=None)[source]

Bases: easycv.predictors.base.OutputProcessor

__init__(score_thresh, classes=None)[source]

Initialize self. See help(type(self)) for accurate signature.

process_single(inputs)[source]

Process the outputs of a single sample. Reimplement it if you need to add extra processing ops.

class easycv.predictors.detector.DetectionPredictor(model_path, config_file=None, batch_size=1, device=None, save_results=False, save_path=None, pipelines=None, score_threshold=0.5, input_processor_threads=8, mode='BGR', *arg, **kwargs)[source]

Bases: easycv.predictors.base.PredictorV2

Generic detection predictor; it filters bbox results by score_threshold.

Parameters
  • model_path (str) – Path of the model file.

  • config_file (Optional[str]) – config file path for model and processor to init. Defaults to None.

  • batch_size (int) – batch size for forward.

  • device (str | torch.device) – Supports str (‘cuda’ or ‘cpu’) or torch.device; if None, the device is detected automatically.

  • save_results (bool) – Whether to save predict results.

  • save_path (str) – File path for saving results, only valid when save_results is True.

  • pipelines (list[dict]) – Data pipeline configs.

  • score_threshold (float) – Score threshold to filter box results.

  • input_processor_threads (int) – Number of processes to process inputs.

  • mode (str) – The image mode fed into the model.

__init__(model_path, config_file=None, batch_size=1, device=None, save_results=False, save_path=None, pipelines=None, score_threshold=0.5, input_processor_threads=8, mode='BGR', *arg, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

get_input_processor()[source]
get_output_processor()[source]
visualize(img, results, show=False, out_file=None)[source]

Only supports showing one sample at a time for now.
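
A usage sketch pairing prediction with visualization; the paths are placeholders, and the call convention follows PredictorV2 as assumed above.

from easycv.predictors.detector import DetectionPredictor

predictor = DetectionPredictor(
    'path/to/det_model.pth',
    config_file='path/to/det_config.py',
    score_threshold=0.5,  # drop low-confidence boxes
)
results = predictor(['demo.jpg'])
# visualize one sample at a time (see the note above)
predictor.visualize('demo.jpg', results[0], out_file='demo_vis.jpg')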

class easycv.predictors.detector.YoloXInputProcessor(cfg, pipelines=None, batch_size=1, model_type='raw', jit_processor_path=None, device=None, threads=8, mode='BGR')[source]

Bases: easycv.predictors.detector.DetInputProcessor

Input processor for yolox.

Parameters
  • cfg (Config) – Config instance.

  • pipelines (list[dict]) – Data pipeline configs.

  • batch_size (int) – batch size for forward.

  • model_type (str) – “raw” or “jit” or “blade”

  • jit_processor_path (str) – File of the saved processing operator of torch jit type.

  • device (str | torch.device) – Supports str (‘cuda’ or ‘cpu’) or torch.device; if None, the device is detected automatically.

  • threads (int) – Number of processes to process inputs.

  • mode (str) – The image mode fed into the model.

__init__(cfg, pipelines=None, batch_size=1, model_type='raw', jit_processor_path=None, device=None, threads=8, mode='BGR')[source]

Initialize self. See help(type(self)) for accurate signature.

build_processor()[source]

Build the processor that processes loaded inputs. Reimplement it if you need custom preprocessing ops.

class easycv.predictors.detector.YoloXOutputProcessor(score_thresh=0.5, model_type='raw', test_conf=0.01, nms_thre=0.65, use_trt_efficientnms=False, classes=None)[source]

Bases: easycv.predictors.detector.DetOutputProcessor

__init__(score_thresh=0.5, model_type='raw', test_conf=0.01, nms_thre=0.65, use_trt_efficientnms=False, classes=None)[source]

Initialize self. See help(type(self)) for accurate signature.

post_assign(outputs, img_metas)[source]
process_single(inputs)[source]

Process the outputs of a single sample. Reimplement it if you need to add extra processing ops.

class easycv.predictors.detector.YoloXPredictor(model_path, config_file=None, batch_size=1, use_trt_efficientnms=False, device=None, save_results=False, save_path=None, pipelines=None, max_det=100, score_thresh=0.5, nms_thresh=None, test_conf=None, input_processor_threads=8, mode='BGR', model_type=None)[source]

Bases: easycv.predictors.detector.DetectionPredictor

Detection predictor for Yolox.

Parameters
  • model_path (str) – Path of the model file.

  • config_file (Optional[str]) – config file path for model and processor to init. Defaults to None.

  • batch_size (int) – batch size for forward.

  • use_trt_efficientnms (bool) – Whether the TensorRT EfficientNMS op is used in the saved model.

  • device (str | torch.device) – Supports str (‘cuda’ or ‘cpu’) or torch.device; if None, the device is detected automatically.

  • save_results (bool) – Whether to save predict results.

  • save_path (str) – File path for saving results, only valid when save_results is True.

  • pipelines (list[dict]) – Data pipeline configs.

  • max_det (int) – Maximum number of detection output boxes.

  • score_thresh (float) – Score threshold to filter boxes.

  • nms_thresh (float) – NMS threshold to filter boxes.

  • input_processor_threads (int) – Number of processes to process inputs.

  • mode (str) – The image mode fed into the model.

__init__(model_path, config_file=None, batch_size=1, use_trt_efficientnms=False, device=None, save_results=False, save_path=None, pipelines=None, max_det=100, score_thresh=0.5, nms_thresh=None, test_conf=None, input_processor_threads=8, mode='BGR', model_type=None)[source]

Initialize self. See help(type(self)) for accurate signature.

prepare_model()[source]

Build the model from the config file by default. Reimplement it if the model is not loaded from a configuration file, e.g. a torch jit model.

model_forward(inputs)[source]

Model forward. Reimplement it if you need to refactor the model forward pass.

get_input_processor()[source]
get_output_processor()[source]
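
A usage sketch for a YOLOX model, assuming that when model_type is left as None it is inferred from the model file (e.g. a torch jit export); the path is a placeholder.

from easycv.predictors.detector import YoloXPredictor

predictor = YoloXPredictor(
    'path/to/yolox.pt.jit',  # placeholder; a raw .pth checkpoint also works
    score_thresh=0.5,
    max_det=100,
)
results = predictor(['demo.jpg'])
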
class easycv.predictors.detector.TorchFaceDetector(model_path=None, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path=None, model_config=None)[source]

init model, add a face detection and alignment step for image input.

Parameters
  • model_path – model file path

  • model_config – config string for model to init, in json format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each output of the predictor should be converted to:

  • type json: data will be serialized to a json str

  • type image: data will be converted to encoded image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video: data will be converted to encoded video binary and written to an oss file

For example:

return {
    'image': 'image',
    'feature': 'json'
}

indicates that the image data in the output dict will be saved to an image file and the feature will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1, threshold=0.95)[source]

Run the session to predict a number of samples using batch_size.

Parameters
  • input_data_list – a list of numpy arrays; each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can also ignore this param and use a fixed number if you do not want to adjust batch_size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample

e.g. {'output1': value1, 'output2': value2}; the value type can be python int, str, float, or numpy array

Return type

result

Raises

If the number of detected faces in an image is not exactly 1, nothing is done for that image.

class easycv.predictors.detector.TorchYoloXClassifierPredictor(models_root_dir, max_det=100, cls_score_thresh=0.01, det_model_config=None, cls_model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(models_root_dir, max_det=100, cls_score_thresh=0.01, det_model_config=None, cls_model_config=None)[source]

init model, add a yolox detector and a classification predictor for image input.

Parameters
  • models_root_dir – root directory of the models, expected to contain models_root_dir/detection/*.pth and models_root_dir/classification/*.pth

  • det_model_config – config string for detection model to init, in json format

  • cls_model_config – config string for classification model to init, in json format

predict(input_data_list, batch_size=-1)[source]

Run the session to predict a number of samples using batch_size.

Parameters
  • input_data_list – a list of numpy arrays (in rgb order); each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can also ignore this param and use a fixed number if you do not want to adjust batch_size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample

e.g. {'output1': value1, 'output2': value2}; the value type can be python int, str, float, or numpy array

Return type

result
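
A usage sketch of this two-stage detect-then-classify pipeline; the models_root_dir layout follows the convention described above, and the image is loaded in rgb order as the parameter list requires.

import cv2
from easycv.predictors.detector import TorchYoloXClassifierPredictor

predictor = TorchYoloXClassifierPredictor('path/to/models_root_dir')
img = cv2.cvtColor(cv2.imread('demo.jpg'), cv2.COLOR_BGR2RGB)  # rgb order
results = predictor.predict([img], batch_size=1)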

easycv.predictors.feature_extractor module

class easycv.predictors.feature_extractor.TorchFeatureExtractor(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None)[source]

init model

Parameters
  • model_path – model file path

  • model_config – config string for model to init, in json format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each output of the predictor should be converted to:

  • type json: data will be serialized to a json str

  • type image: data will be converted to encoded image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video: data will be converted to encoded video binary and written to an oss file

For example:

return {
    'image': 'image',
    'feature': 'json'
}

indicates that the image data in the output dict will be saved to an image file and the feature will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1)[source]

Run the session to predict a number of samples using batch_size.

Parameters
  • input_data_list – a list of numpy arrays; each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can also ignore this param and use a fixed number if you do not want to adjust batch_size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample

e.g. {'output1': value1, 'output2': value2}; the value type can be python int, str, float, or numpy array

Return type

result
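
A usage sketch; the checkpoint path is a placeholder, and the exact output keys depend on the model.

import cv2
from easycv.predictors.feature_extractor import TorchFeatureExtractor

extractor = TorchFeatureExtractor('path/to/model.pth')
results = extractor.predict([cv2.imread('demo.jpg')], batch_size=1)
# each element is a dict of named outputs, e.g. an image feature vector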

class easycv.predictors.feature_extractor.TorchFaceFeatureExtractor(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None)[source]

init model, add a face detection and alignment step for image input.

Parameters
  • model_path – model file path

  • model_config – config string for model to init, in json format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each output of the predictor should be converted to:

  • type json: data will be serialized to a json str

  • type image: data will be converted to encoded image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video: data will be converted to encoded video binary and written to an oss file

For example:

return {
    'image': 'image',
    'feature': 'json'
}

indicates that the image data in the output dict will be saved to an image file and the feature will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1, detect_and_align=True)[source]

Run the session to predict a number of samples using batch_size.

Parameters
  • input_data_list – a list of numpy arrays or PIL.Images; each one is a sample to be predicted

  • batch_size – batch size passed by the caller; you can also ignore this param and use a fixed number if you do not want to adjust batch_size at runtime

  • detect_and_align – True to run face detection and alignment before feature extraction

Returns

a list of dicts, each dict is the prediction result of one sample

e.g. {'output1': value1, 'output2': value2}; the value type can be python int, str, float, or numpy array

Return type

result

Raises

If the number of detected faces in an image is not exactly 1, nothing is done for that image.
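
A short usage sketch; set detect_and_align=False only if the inputs are already cropped and aligned face images. The checkpoint path is a placeholder.

from PIL import Image
from easycv.predictors.feature_extractor import TorchFaceFeatureExtractor

extractor = TorchFaceFeatureExtractor('path/to/face_model.pth')
results = extractor.predict([Image.open('face.jpg')], detect_and_align=True)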

class easycv.predictors.feature_extractor.TorchMultiFaceFeatureExtractor(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None)[source]

init model, add a face detection and alignment step for image input.

Parameters
  • model_path – model file path

  • model_config – config string for model to init, in json format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each output of the predictor should be converted to:

  • type json: data will be serialized to a json str

  • type image: data will be converted to encoded image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video: data will be converted to encoded video binary and written to an oss file

For example:

return {
    'image': 'image',
    'feature': 'json'
}

indicates that the image data in the output dict will be saved to an image file and the feature will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1, detect_and_align=True)[source]

Run the session to predict a number of samples using batch_size.

Parameters
  • input_data_list – a list of numpy arrays or PIL.Images; each one is a sample to be predicted

  • batch_size – batch size passed by the caller; you can also ignore this param and use a fixed number if you do not want to adjust batch_size at runtime

  • detect_and_align – True to run face detection and alignment before feature extraction

Returns

a list of dicts, each dict is the prediction result of one sample

e.g. {'output1': value1, 'output2': value2}; the value type can be python int, str, float, or numpy array

Return type

result

Raises

If the number of detected faces in an image is not exactly 1, nothing is done for that image.

class easycv.predictors.feature_extractor.TorchFaceAttrExtractor(model_path, model_config=None, face_threshold=0.95, attr_method=['distribute_sum', 'softmax', 'softmax'], attr_name=['age', 'gender', 'emo'])[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None, face_threshold=0.95, attr_method=['distribute_sum', 'softmax', 'softmax'], attr_name=['age', 'gender', 'emo'])[source]

init model

Parameters
  • model_path – model file path

  • model_config – config string for model to init, in json format

  • attr_method

    • softmax: do softmax for feature_dim 1

    • distribute_sum: do softmax and prob sum

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each output of the predictor should be converted to:

  • type json: data will be serialized to a json str

  • type image: data will be converted to encoded image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video: data will be converted to encoded video binary and written to an oss file

For example:

return {
    'image': 'image',
    'feature': 'json'
}

indicates that the image data in the output dict will be saved to an image file and the feature will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1)[source]

Run the session to predict a number of samples using batch_size.

Parameters
  • input_data_list – a list of numpy arrays; each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can also ignore this param and use a fixed number if you do not want to adjust batch_size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample

e.g. {'output1': value1, 'output2': value2}; the value type can be python int, str, float, or numpy array

Return type

result

easycv.predictors.interface module

class easycv.predictors.interface.PredictorInterface(model_path, model_config=None)[source]

Bases: object

version = 1
__init__(model_path, model_config=None)[source]

init model

Parameters
  • model_path – init model from this directory

  • model_config – config string for model to init, in json format

abstract predict(input_data, batch_size)[source]

Run the session to predict a number of samples using batch_size.

Parameters
  • input_data – a list of numpy arrays; each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can also ignore this param and use a fixed number if you do not want to adjust batch_size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample

e.g. {'output1': value1, 'output2': value2}; the value type can be python int, str, float, or numpy array

Return type

result

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each output of the predictor should be converted to:

  • type json: data will be serialized to a json str

  • type image: data will be converted to encoded image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video: data will be converted to encoded video binary and written to an oss file

For example:

return {
    'image': 'image',
    'feature': 'json'
}

indicates that the image data in the output dict will be saved to an image file and the feature will be converted to json.
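
A sketch of a minimal PredictorInterface implementation; load_my_model and the 'feature' output key are hypothetical placeholders, with a dummy stub so the example is self-contained.

from easycv.predictors.interface import PredictorInterface

def load_my_model(path):
    # hypothetical stand-in for real model loading
    return lambda sample: [0.0]  # returns a dummy feature

class MyPredictor(PredictorInterface):

    def __init__(self, model_path, model_config=None):
        self.model = load_my_model(model_path)

    def predict(self, input_data, batch_size):
        # one result dict per input sample
        return [{'feature': self.model(sample)} for sample in input_data]

    def get_output_type(self):
        # serialize the 'feature' output to json
        return {'feature': 'json'}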

class easycv.predictors.interface.PredictorInterfaceV2(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

version = 2
__init__(model_path, model_config=None)[source]

init model

Parameters
  • model_path – init model from this directory

  • model_config – config string for model to init, in json format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each output of the predictor should be converted to:

  • type json: data will be serialized to a json str

  • type image: data will be converted to encoded image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video: data will be converted to encoded video binary and written to an oss file

For example:

return {
    'image': 'image',
    'feature': 'json'
}

indicates that the image data in the output dict will be saved to an image file and the feature will be converted to json.

abstract predict(input_data_dict_list, batch_size)[source]

Run the session to predict a number of samples using batch_size.

Parameters
  • input_data_dict_list – a list of dicts; each dict is a sample to be predicted

  • batch_size – batch size passed by the caller; you can also ignore this param and use a fixed number if you do not want to adjust batch_size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample

e.g. {'output1': value1, 'output2': value2}; the value type can be python int, str, float, or numpy array

Return type

result

easycv.predictors.pose_predictor module

easycv.predictors.pose_predictor.vis_pose_result(model, img, result, radius=4, thickness=1, kpt_score_thr=0.3, bbox_color='green', dataset_info=None, out_file=None, pose_kpt_color=None, pose_link_color=None, text_color='white', font_scale=0.5, bbox_thickness=1, win_name='', show=False, wait_time=0)[source]

Visualize the detection results on the image.

Parameters
  • model (nn.Module) – The loaded detector.

  • img (str | np.ndarray) – Image filename or loaded image.

  • result (list[dict]) – The results to draw over img (bbox_result, pose_result).

  • radius (int) – Radius of circles.

  • thickness (int) – Thickness of lines.

  • kpt_score_thr (float) – The threshold to visualize the keypoints.

  • out_file (str or None) – The filename of the output visualization image. Default: None.

  • show (bool) – Whether to show the image. Default: False.

  • wait_time (int) – Value of waitKey param. Default: 0.
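
A usage sketch; model and pose_results are free variables here, assumed to come from a pose predictor run such as PoseTopDownPredictor below, and the file names are placeholders.

from easycv.predictors.pose_predictor import vis_pose_result

vis_pose_result(
    model,         # loaded pose model (nn.Module), from a prior predictor run
    'demo.jpg',
    pose_results,  # list[dict] with bbox and pose results
    kpt_score_thr=0.3,
    out_file='pose_vis.jpg',
)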

class easycv.predictors.pose_predictor.PoseTopDownInputProcessor(cfg, dataset_info, detection_predictor_config, bbox_thr=None, pipelines=None, batch_size=1, cat_id=None, mode='BGR')[source]

Bases: easycv.predictors.base.InputProcessor

__init__(cfg, dataset_info, detection_predictor_config, bbox_thr=None, pipelines=None, batch_size=1, cat_id=None, mode='BGR')[source]

Initialize self. See help(type(self)) for accurate signature.

get_detection_outputs(input, cat_id=None)[source]
process_single(input)[source]

Process a single input sample. Reimplement it if you need custom ops to load or process a single input sample.

class easycv.predictors.pose_predictor.PoseTopDownOutputProcessor[source]

Bases: easycv.predictors.base.OutputProcessor

class easycv.predictors.pose_predictor.PoseTopDownPredictor(model_path, config_file=None, detection_predictor_config=None, batch_size=1, bbox_thr=None, cat_id=None, device=None, pipelines=None, save_results=False, save_path=None, mode='BGR', model_type=None, *args, **kwargs)[source]

Bases: easycv.predictors.base.PredictorV2

Pose topdown predictor.

Parameters
  • model_path (str) – Path of the model file.

  • config_file (Optional[str]) – Config file path for model and processor to init. Defaults to None.

  • detection_predictor_config (dict) – Dict of person detection model predictor config, e.g. dict(type="", model_path="", config_file="", ...).

  • batch_size (int) – Batch size for forward.

  • bbox_thr (float) – Bounding box threshold to filter output results of the detection model.

  • cat_id (int | str) – Category id or name to filter target objects.

  • device (str | torch.device) – Supports str (‘cuda’ or ‘cpu’) or torch.device; if None, the device is detected automatically.

  • save_results (bool) – Whether to save predict results.

  • save_path (str) – File path for saving results, only valid when save_results is True.

  • pipelines (list[dict]) – Data pipeline configs.

  • mode (str) – The image mode fed into the model.

__init__(model_path, config_file=None, detection_predictor_config=None, batch_size=1, bbox_thr=None, cat_id=None, device=None, pipelines=None, save_results=False, save_path=None, mode='BGR', model_type=None, *args, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

prepare_model()[source]

Build the model from the config file by default. Reimplement it if the model is not loaded from a configuration file, e.g. a torch jit model.

model_forward(inputs, return_heatmap=False)[source]

Model forward. Reimplement it if you need to refactor the model forward pass.

get_input_processor()[source]
get_output_processor()[source]
show_result(image, keypoints, radius=4, thickness=3, kpt_score_thr=0.3, bbox_color='green', show=False, save_path=None)[source]
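
A usage sketch; the detection predictor config follows the dict form described above, all paths are placeholders, and the cat_id value is an assumption (e.g. the person category).

from easycv.predictors.pose_predictor import PoseTopDownPredictor

predictor = PoseTopDownPredictor(
    model_path='path/to/pose_model.pth',
    detection_predictor_config=dict(
        type='DetectionPredictor',
        model_path='path/to/det_model.pth',
        config_file='path/to/det_config.py',
        score_threshold=0.5,
    ),
    cat_id=0,  # assumed person category id
)
results = predictor(['demo.jpg'])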