easycv.predictors package

Submodules

easycv.predictors.base module

class easycv.predictors.base.NumpyToPIL[source]

Bases: object

class easycv.predictors.base.Predictor(model_path, numpy_to_pil=True)[source]

Bases: object

__init__(model_path, numpy_to_pil=True)[source]

Initialize self. See help(type(self)) for accurate signature.

preprocess(image_list)[source]
predict_batch(image_batch, **forward_kwargs)[source]

Predict using batched data.

Parameters
  • image_batch (torch.Tensor) – tensor with shape [N, 3, H, W]

  • forward_kwargs – additional keyword arguments passed to model.forward

Returns

the output of model.forward, list or tuple

Return type

output
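
A minimal usage sketch (hedged): the checkpoint path is a placeholder, device placement is not shown, and in practice the batch would normally come from preprocess() on a list of images rather than a random tensor::

    import torch

    from easycv.predictors.base import Predictor

    # Placeholder path to an exported EasyCV model checkpoint.
    predictor = Predictor('work_dirs/export/model.pth')

    # predict_batch expects a tensor shaped [N, 3, H, W], as documented above.
    # A dummy batch is used here purely for illustration.
    dummy_batch = torch.rand(2, 3, 224, 224)

    with torch.no_grad():
        outputs = predictor.predict_batch(dummy_batch)

    # outputs is whatever model.forward returns (a list or tuple).
    print(type(outputs))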

easycv.predictors.builder module

easycv.predictors.builder.build_predictor(cfg)[source]
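
A sketch of building a predictor from a config, assuming the mmcv-style registry convention used elsewhere in EasyCV, where the type field names a registered predictor class and the remaining keys are passed to its constructor (the config keys and paths below are illustrative assumptions)::

    from easycv.predictors.builder import build_predictor

    # Assumed registry-style config; paths are placeholders.
    cfg = dict(
        type='TorchClassifier',
        model_path='work_dirs/export/classifier.pth',
        topk=5,
    )
    predictor = build_predictor(cfg)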

easycv.predictors.classifier module

class easycv.predictors.classifier.TorchClassifier(model_path, model_config=None, topk=1, label_map_path=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None, topk=1, label_map_path=None)[source]

Initialize the model.

Parameters
  • model_path – model file path

  • model_config – config string used to initialize the model, in JSON format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each entry of the predictor output should be converted to:

  • type json, data will be serialized to a json str

  • type image, data will be encoded as image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video, data will be encoded as video binary and written to an oss file

For example::

    return {'image': 'image', 'feature': 'json'}

indicates that the image data in the output dict will be saved to an image file and the feature in the output dict will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_list – a list of numpy arrays, each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result
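
A usage sketch for TorchClassifier (hedged: the checkpoint and label-map paths are placeholders, and the exact keys of each result dict depend on the exported model and its config)::

    import numpy as np
    from PIL import Image

    from easycv.predictors.classifier import TorchClassifier

    # Placeholder paths to an exported classification model and a label map file.
    classifier = TorchClassifier(
        'work_dirs/export/classifier.pth',
        topk=3,
        label_map_path='data/label_map.txt',
    )

    # predict takes a list of numpy arrays, one per sample.
    img = np.asarray(Image.open('demo/cat.jpg').convert('RGB'))
    results = classifier.predict([img])

    # One result dict per sample; the available keys depend on the model.
    for res in results:
        print(res.keys())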

easycv.predictors.detector module

class easycv.predictors.detector.TorchYoloXPredictor(model_path, max_det=100, score_thresh=0.5, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, max_det=100, score_thresh=0.5, model_config=None)[source]

Initialize the model.

Parameters
  • model_path – model file path

  • max_det – maximum number of detections

  • score_thresh – score threshold used to filter boxes

  • model_config – config string used to initialize the model, in JSON format

predict(input_data_list, batch_size=-1)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_list – a list of numpy arrays (in RGB order), each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result
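
A usage sketch for TorchYoloXPredictor (hedged: the checkpoint path is a placeholder and the result-dict keys depend on the exported model)::

    import numpy as np
    from PIL import Image

    from easycv.predictors.detector import TorchYoloXPredictor

    # Placeholder path to an exported YOLOX model.
    detector = TorchYoloXPredictor(
        'work_dirs/export/yolox.pth',
        max_det=100,
        score_thresh=0.5,
    )

    # Inputs must be numpy arrays in RGB order, as documented above.
    img = np.asarray(Image.open('demo/street.jpg').convert('RGB'))
    results = detector.predict([img])

    # One result dict per input image; inspect the keys for boxes/scores/classes.
    print(results[0].keys())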

class easycv.predictors.detector.TorchFaceDetector(model_path=None, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path=None, model_config=None)[source]

Initialize the model; add face detection and alignment for image input.

Parameters
  • model_path – model file path

  • model_config – config string used to initialize the model, in JSON format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each entry of the predictor output should be converted to:

  • type json, data will be serialized to a json str

  • type image, data will be encoded as image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video, data will be encoded as video binary and written to an oss file

For example::

    return {'image': 'image', 'feature': 'json'}

indicates that the image data in the output dict will be saved to an image file and the feature in the output dict will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1, threshold=0.95)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_list – a list of numpy arrays, each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result

Raises

if the number of faces detected in an image is not exactly 1, nothing is done for that image

class easycv.predictors.detector.TorchYoloXClassifierPredictor(models_root_dir, max_det=100, cls_score_thresh=0.01, det_model_config=None, cls_model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(models_root_dir, max_det=100, cls_score_thresh=0.01, det_model_config=None, cls_model_config=None)[source]

Initialize the model; add a YOLOX detector and a classification predictor for image input.

Parameters
  • models_root_dir – root directory containing the detection model (models_root_dir/detection/*.pth) and the classification model (models_root_dir/classification/*.pth)

  • det_model_config – config string used to initialize the detection model, in JSON format

  • cls_model_config – config string used to initialize the classification model, in JSON format

predict(input_data_list, batch_size=-1)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_list – a list of numpy arrays (in RGB order), each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result
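
A usage sketch for the two-stage detector + classifier predictor (hedged: the directory layout follows the models_root_dir convention described above, the path is a placeholder, and result keys depend on the exported models)::

    import numpy as np
    from PIL import Image

    from easycv.predictors.detector import TorchYoloXClassifierPredictor

    # models_root_dir is expected to contain detection/*.pth and
    # classification/*.pth (placeholder path below).
    predictor = TorchYoloXClassifierPredictor(
        'work_dirs/export/pipeline',
        max_det=50,
        cls_score_thresh=0.01,
    )

    img = np.asarray(Image.open('demo/shelf.jpg').convert('RGB'))
    results = predictor.predict([img])
    print(results[0].keys())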

easycv.predictors.feature_extractor module

class easycv.predictors.feature_extractor.TorchFeatureExtractor(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None)[source]

Initialize the model.

Parameters
  • model_path – model file path

  • model_config – config string used to initialize the model, in JSON format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each entry of the predictor output should be converted to:

  • type json, data will be serialized to a json str

  • type image, data will be encoded as image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video, data will be encoded as video binary and written to an oss file

For example::

    return {'image': 'image', 'feature': 'json'}

indicates that the image data in the output dict will be saved to an image file and the feature in the output dict will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_list – a list of numpy arrays, each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result
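
A usage sketch for TorchFeatureExtractor (hedged: the path is a placeholder; the feature key and dimensionality depend on the exported model)::

    import numpy as np
    from PIL import Image

    from easycv.predictors.feature_extractor import TorchFeatureExtractor

    extractor = TorchFeatureExtractor('work_dirs/export/feature_model.pth')

    img = np.asarray(Image.open('demo/product.jpg').convert('RGB'))
    results = extractor.predict([img])

    # Each result dict holds the extracted feature(s) for one image.
    for key, value in results[0].items():
        print(key, getattr(value, 'shape', type(value)))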

class easycv.predictors.feature_extractor.TorchFaceFeatureExtractor(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None)[source]

Initialize the model; add face detection and alignment for image input.

Parameters
  • model_path – model file path

  • model_config – config string used to initialize the model, in JSON format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each entry of the predictor output should be converted to:

  • type json, data will be serialized to a json str

  • type image, data will be encoded as image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video, data will be encoded as video binary and written to an oss file

For example::

    return {'image': 'image', 'feature': 'json'}

indicates that the image data in the output dict will be saved to an image file and the feature in the output dict will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1, detect_and_align=True)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_list – a list of numpy arrays or PIL.Images, each one a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

  • detect_and_align – True to detect and align faces before feature extraction

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result

Raises

if the number of faces detected in an image is not exactly 1, nothing is done for that image
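
A usage sketch for TorchFaceFeatureExtractor (hedged: the path is a placeholder; per the note above, images in which the detector does not find exactly one face are skipped)::

    from PIL import Image

    from easycv.predictors.feature_extractor import TorchFaceFeatureExtractor

    extractor = TorchFaceFeatureExtractor('work_dirs/export/face_model.pth')

    # PIL.Image inputs are accepted as well as numpy arrays.
    face_img = Image.open('demo/face.jpg').convert('RGB')

    # detect_and_align=True runs face detection and alignment before extraction;
    # set it to False if the input is already an aligned face crop.
    results = extractor.predict([face_img], detect_and_align=True)
    print(results[0].keys())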

class easycv.predictors.feature_extractor.TorchMultiFaceFeatureExtractor(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None)[source]

Initialize the model; add face detection and alignment for image input.

Parameters
  • model_path – model file path

  • model_config – config string used to initialize the model, in JSON format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each entry of the predictor output should be converted to:

  • type json, data will be serialized to a json str

  • type image, data will be encoded as image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video, data will be encoded as video binary and written to an oss file

For example::

    return {'image': 'image', 'feature': 'json'}

indicates that the image data in the output dict will be saved to an image file and the feature in the output dict will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1, detect_and_align=True)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_list – a list of numpy arrays or PIL.Images, each one a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

  • detect_and_align – True to detect and align faces before feature extraction

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result

Raises

if the number of faces detected in an image is not exactly 1, nothing is done for that image

class easycv.predictors.feature_extractor.TorchFaceAttrExtractor(model_path, model_config=None, face_threshold=0.95, attr_method=['distribute_sum', 'softmax', 'softmax'], attr_name=['age', 'gender', 'emo'])[source]

Bases: easycv.predictors.interface.PredictorInterface

__init__(model_path, model_config=None, face_threshold=0.95, attr_method=['distribute_sum', 'softmax', 'softmax'], attr_name=['age', 'gender', 'emo'])[source]

Initialize the model.

Parameters
  • model_path – model file path

  • model_config – config string used to initialize the model, in JSON format

  • attr_method –

    • softmax: apply softmax over feature dim 1

    • distribute_sum: apply softmax, then a probability sum

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each entry of the predictor output should be converted to:

  • type json, data will be serialized to a json str

  • type image, data will be encoded as image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video, data will be encoded as video binary and written to an oss file

For example::

    return {'image': 'image', 'feature': 'json'}

indicates that the image data in the output dict will be saved to an image file and the feature in the output dict will be converted to json.

batch(image_tensor_list)[source]
predict(input_data_list, batch_size=-1)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_list – a list of numpy arrays, each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result
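
A usage sketch for TorchFaceAttrExtractor (hedged: the path is a placeholder; the attribute names follow the attr_name argument, but the exact output keys depend on the model)::

    import numpy as np
    from PIL import Image

    from easycv.predictors.feature_extractor import TorchFaceAttrExtractor

    extractor = TorchFaceAttrExtractor(
        'work_dirs/export/face_attr_model.pth',
        face_threshold=0.95,
        attr_method=['distribute_sum', 'softmax', 'softmax'],
        attr_name=['age', 'gender', 'emo'],
    )

    img = np.asarray(Image.open('demo/face.jpg').convert('RGB'))
    results = extractor.predict([img])
    print(results[0])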

easycv.predictors.interface module

class easycv.predictors.interface.PredictorInterface(model_path, model_config=None)[source]

Bases: object

version = 1
__init__(model_path, model_config=None)[source]

Initialize the model.

Parameters
  • model_path – directory from which to initialize the model

  • model_config – config string used to initialize the model, in JSON format

abstract predict(input_data, batch_size)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data – a list of numpy arrays, each array is a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each entry of the predictor output should be converted to:

  • type json, data will be serialized to a json str

  • type image, data will be encoded as image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video, data will be encoded as video binary and written to an oss file

For example::

    return {'image': 'image', 'feature': 'json'}

indicates that the image data in the output dict will be saved to an image file and the feature in the output dict will be converted to json.
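
The interface can be implemented for custom predictors; a minimal sketch (the class name and the mean-pixel "feature" logic are purely illustrative)::

    import json

    import numpy as np

    from easycv.predictors.interface import PredictorInterface


    class DummyFeaturePredictor(PredictorInterface):
        """Illustrative predictor returning a mean-pixel 'feature' per image."""

        def __init__(self, model_path, model_config=None):
            # A real implementation would load weights from model_path and
            # parse the JSON config string in model_config.
            self.config = json.loads(model_config) if model_config else {}

        def predict(self, input_data, batch_size):
            # One result dict per input sample, as required by the interface.
            return [{'feature': np.asarray(img, dtype=np.float32).mean(axis=(0, 1))}
                    for img in input_data]

        def get_output_type(self):
            # Serialize the feature to json when results are written out.
            return {'feature': 'json'}


    predictor = DummyFeaturePredictor('unused/path')
    print(predictor.predict([np.ones((8, 8, 3))], batch_size=1))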

class easycv.predictors.interface.PredictorInterfaceV2(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

version = 2
__init__(model_path, model_config=None)[source]

Initialize the model.

Parameters
  • model_path – directory from which to initialize the model

  • model_config – config string used to initialize the model, in JSON format

get_output_type()[source]

In this function the user should return a type dict, which indicates which type each entry of the predictor output should be converted to:

  • type json, data will be serialized to a json str

  • type image, data will be encoded as image binary and written to an oss file named output_dir/${key}/${input_filename}_${idx}.jpg, where input_filename is the base filename extracted from the url and key corresponds to the key in the output_type dict; if the data indexed by key is a list, idx is the index of the element in the list, otherwise ${idx} is empty

  • type video, data will be encoded as video binary and written to an oss file

For example::

    return {'image': 'image', 'feature': 'json'}

indicates that the image data in the output dict will be saved to an image file and the feature in the output dict will be converted to json.

abstract predict(input_data_dict_list, batch_size)[source]

Run prediction on a number of samples, batched according to batch_size.

Parameters
  • input_data_dict_list – a list of dicts, each dict is a sample to be predicted

  • batch_size – batch size passed by the caller; you can ignore this parameter and use a fixed number if you do not want to adjust the batch size at runtime

Returns

a list of dicts, each dict is the prediction result of one sample,

e.g. {'output1': value1, 'output2': value2}; the value type can be a python int, str, float, or numpy array

Return type

result

easycv.predictors.pose_predictor module

class easycv.predictors.pose_predictor.LoadImage(color_type='color', channel_order='rgb')[source]

Bases: object

A simple pipeline to load an image.

__init__(color_type='color', channel_order='rgb')[source]

Initialize self. See help(type(self)) for accurate signature.

easycv.predictors.pose_predictor.rgetattr(obj, attr, *args)[source]
class easycv.predictors.pose_predictor.OutputHook(module, outputs=None, as_tensor=False)[source]

Bases: object

__init__(module, outputs=None, as_tensor=False)[source]

Initialize self. See help(type(self)) for accurate signature.

register(module)[source]
remove()[source]
class easycv.predictors.pose_predictor.TorchPoseTopDownPredictor(model_path, model_config=None)[source]

Bases: easycv.predictors.interface.PredictorInterface

Run pose inference on a single image given a list of bounding boxes.

__init__(model_path, model_config=None)[source]

Initialize the model.

Parameters
  • model_path – model file path

  • model_config – config string used to initialize the model, in JSON format

predict(input_data_list, batch_size=-1, return_heatmap=False)[source]

Inference pose.

Parameters
  • input_data_list –

    a list of image infos, like::

        [
            {
                'img' (str | np.ndarray, RGB): image filename or loaded image,
                'detection_results' (list | np.ndarray): all bounding boxes (with scores),
                    shaped (N, 4) or (N, 5), i.e. (left, top, width, height, [score]),
                    where N is the number of bounding boxes
            }
        ]

  • batch_size – batch size

  • return_heatmap – whether to return heatmap values; default False

Returns

    {
        'pose_results': list of ndarray[N, K, 3], the predicted pose (x, y, score),
        'pose_heatmap' (optional): list of heatmap[N, K, H, W], the model output heatmap
    }
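
A usage sketch (hedged: the checkpoint path is a placeholder, and the detection box values are dummy numbers in the documented (left, top, width, height, score) format)::

    import numpy as np

    from easycv.predictors.pose_predictor import TorchPoseTopDownPredictor

    pose_predictor = TorchPoseTopDownPredictor('work_dirs/export/pose_model.pth')

    # Each entry pairs an image with its person bounding boxes.
    input_data_list = [{
        'img': 'demo/person.jpg',
        'detection_results': np.array([[50.0, 30.0, 120.0, 300.0, 0.98]]),
    }]

    output = pose_predictor.predict(input_data_list, return_heatmap=False)
    print(len(output['pose_results']))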

class easycv.predictors.pose_predictor.TorchPoseTopDownPredictorWithDetector(model_path, model_config={'detection': {'model_type': None, 'reserved_classes': [], 'score_thresh': 0.0}, 'pose': {'bbox_thr': 0.3, 'format': 'xywh'}})[source]

Bases: easycv.predictors.interface.PredictorInterface

SUPPORT_DETECTION_PREDICTORS = {'TorchYoloXPredictor': <class 'easycv.predictors.detector.TorchYoloXPredictor'>}
__init__(model_path, model_config={'detection': {'model_type': None, 'reserved_classes': [], 'score_thresh': 0.0}, 'pose': {'bbox_thr': 0.3, 'format': 'xywh'}})[source]

Initialize the model.

Parameters
  • model_path – pose and detection model file paths, separated by a comma; the first must be the pose model and the second the detection model

  • model_config – config string used to initialize the model, in JSON format

process_det_results(outputs, input_data_list, reserved_classes=[])[source]
predict(input_data_list, batch_size=-1, return_heatmap=False)[source]

Run inference with the pose model and the detection model.

Parameters
  • input_data_list – a list of images (np.ndarray, RGB)

  • batch_size – batch size

  • return_heatmap – whether to return heatmap values; default False

Returns

    {
        'pose_results': list of ndarray[N, K, 3], the predicted pose (x, y, score),
        'pose_heatmap' (optional): list of heatmap[N, K, H, W], the model output heatmap
    }
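
A usage sketch (hedged: paths are placeholders, model_path joins the pose and detection checkpoints with a comma as documented above, the model_config layout mirrors the defaults in the signature, and the reserved_classes / score_thresh values are illustrative)::

    import numpy as np
    from PIL import Image

    from easycv.predictors.pose_predictor import TorchPoseTopDownPredictorWithDetector

    predictor = TorchPoseTopDownPredictorWithDetector(
        # pose checkpoint first, detection checkpoint second, comma-separated.
        'work_dirs/export/pose_model.pth,work_dirs/export/yolox.pth',
        model_config={
            'detection': {
                'model_type': 'TorchYoloXPredictor',
                'reserved_classes': ['person'],  # illustrative value
                'score_thresh': 0.5,
            },
            'pose': {'bbox_thr': 0.3, 'format': 'xywh'},
        },
    )

    img = np.asarray(Image.open('demo/street.jpg').convert('RGB'))
    output = predictor.predict([img])
    print(len(output['pose_results']))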

easycv.predictors.pose_predictor.vis_pose_result(model, img, result, radius=4, thickness=1, kpt_score_thr=0.3, bbox_color='green', dataset_info=None, show=False, out_file=None)[source]

Visualize the detection results on the image.

Parameters
  • model (nn.Module) – The loaded detector.

  • img (str | np.ndarray) – Image filename or loaded image.

  • result (list[dict]) – The results to draw over img (bbox_result, pose_result).

  • radius (int) – Radius of circles.

  • thickness (int) – Thickness of lines.

  • kpt_score_thr (float) – The threshold to visualize the keypoints.

  • skeleton (list[tuple]) – Default: None.

  • show (bool) – Whether to show the image. Default: False.

  • out_file (str | None) – The filename of the output visualization image.
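
A usage sketch (hedged: whether the pose predictor's loaded module is exposed as a model attribute, and whether its pose_results can be passed to vis_pose_result directly, are assumptions here; paths are placeholders)::

    from easycv.predictors.pose_predictor import (TorchPoseTopDownPredictor,
                                                  vis_pose_result)

    pose_predictor = TorchPoseTopDownPredictor('work_dirs/export/pose_model.pth')
    output = pose_predictor.predict([{
        'img': 'demo/person.jpg',
        'detection_results': [[50.0, 30.0, 120.0, 300.0, 0.98]],
    }])

    # Draw the predicted keypoints over the image and write the visualization.
    vis_pose_result(
        pose_predictor.model,      # assumed attribute holding the loaded nn.Module
        'demo/person.jpg',
        output['pose_results'],    # results to draw, per the parameters above
        radius=4,
        thickness=1,
        kpt_score_thr=0.3,
        show=False,
        out_file='vis/person_pose.jpg',
    )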