easycv.apis package

Submodules

easycv.apis.export module

easycv.apis.export.export(cfg, ckpt_path, filename, model=None, **kwargs)[source]

Export model for inference.

Parameters
  • cfg – Config object

  • ckpt_path (str) – path to checkpoint file

  • filename (str) – filename to save exported models

  • model (nn.Module) – model instance
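
Example

A hedged usage sketch: the config and checkpoint paths below are placeholders, and loading the config with mmcv's Config is shown only for illustration.

>>> from mmcv import Config
>>> from easycv.apis.export import export
>>> cfg = Config.fromfile('configs/detection/yolox/yolox_s_8xb16_300e_coco.py')  # placeholder path
>>> export(cfg, ckpt_path='work_dir/epoch_300.pth', filename='work_dir/yolox_s_export')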

class easycv.apis.export.PreProcess(target_size: Tuple[int, int] = (640, 640), keep_ratio: bool = True)[source]

Bases: object

Process the data input to the model.

Parameters
  • target_size (Tuple[int, int]) – output spatial size.

  • keep_ratio (bool) – Whether to keep the aspect ratio when resizing the image.

__init__(target_size: Tuple[int, int] = (640, 640), keep_ratio: bool = True)[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.apis.export.ModelExportWrapper(model, example_inputs, trace_model: bool = True)[source]

Bases: torch.nn.modules.module.Module

__init__(model, example_inputs, trace_model: bool = True) → None[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

trace_module(**kwargs)[source]
forward(image)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
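
Example

A hedged construction sketch: the wrapper is normally created inside export(), so the toy module and input below are placeholders, and tracing is disabled because the traced forward path of the wrapped model is an implementation detail not described by this signature.

>>> import torch
>>> from easycv.apis.export import ModelExportWrapper
>>> toy = torch.nn.Conv2d(3, 8, 3)  # placeholder for an EasyCV model prepared for export
>>> example = torch.rand(1, 3, 224, 224)
>>> wrapper = ModelExportWrapper(toy, example_inputs=(example,), trace_model=False)
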
class easycv.apis.export.ProcessExportWrapper(example_inputs, process_fn: Optional[Callable] = None)[source]

Bases: torch.nn.modules.module.Module

Split out the preprocessing so that it can be wrapped as a standalone preprocess JIT model. The preprocessing procedure cannot be optimized into an end-to-end Blade model due to dynamic shape problems.

__init__(example_inputs, process_fn: Optional[Callable] = None) → None[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(image)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
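
Example

A hedged construction sketch pairing PreProcess (documented above) with this wrapper. Whether the wrapper scripts the function immediately and which tensor layout PreProcess expects are assumptions here, so the example shapes are placeholders.

>>> import torch
>>> from easycv.apis.export import PreProcess, ProcessExportWrapper
>>> example = torch.rand(1, 3, 480, 640)
>>> preprocess = ProcessExportWrapper(
...     example_inputs=(example,),
...     process_fn=PreProcess(target_size=(640, 640), keep_ratio=True))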

easycv.apis.test module

easycv.apis.test.single_cpu_test(model, data_loader, mode='test', show=False, out_dir=None, show_score_thr=0.3, **kwargs)[source]
easycv.apis.test.single_gpu_test(model, data_loader, mode='test', use_fp16=False, **kwargs)[source]

Test model with a single gpu.

This method tests the model with a single gpu.

Parameters
  • model (nn.Module) – Model to be tested.

  • data_loader (DataLoader) – PyTorch data loader.

  • mode (str) – forward mode for the model, e.g. ‘test’.

  • use_fp16 (bool) – Whether to use fp16 inference.

Returns

The prediction results.

Return type

list
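
Example

A schematic call: model and val_loader are placeholders assumed to be built beforehand (an nn.Module and a PyTorch DataLoader over the evaluation set).

>>> from easycv.apis.test import single_gpu_test
>>> results = single_gpu_test(model, val_loader, mode='test', use_fp16=False)
>>> # results: list of prediction results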

easycv.apis.test.multi_gpu_test(model, data_loader, mode='test', tmpdir=None, gpu_collect=False, use_fp16=False, **kwargs)[source]

Test model with multiple gpus.

This method tests model with multiple gpus and collects the results under two different modes: gpu and cpu modes. By setting ‘gpu_collect=True’ it encodes results to gpu tensors and uses gpu communication for results collection. In cpu mode it saves the results on different gpus to ‘tmpdir’ and the rank 0 worker collects them.

Parameters
  • model (nn.Module) – Model to be tested.

  • data_loader (DataLoader) – PyTorch data loader.

  • mode (str) – forward mode for the model, e.g. ‘test’.

  • tmpdir (str) – Path of directory to save the temporary results from different gpus under cpu mode.

  • gpu_collect (bool) – Option to use either gpu or cpu to collect results.

  • use_fp16 (bool) – Whether to use fp16 inference.

Returns

The prediction results.

Return type

list
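
Example

A schematic call for a torch.distributed run: model and val_loader are placeholders, and each rank is assumed to iterate its own shard of the data.

>>> from easycv.apis.test import multi_gpu_test
>>> # gpu mode: gather results through gpu communication
>>> results = multi_gpu_test(model, val_loader, mode='test', gpu_collect=True)
>>> # cpu mode: each rank dumps partial results to tmpdir and rank 0 collects them
>>> results = multi_gpu_test(model, val_loader, mode='test', tmpdir='./tmp_eval', gpu_collect=False)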

easycv.apis.test.collect_results_cpu(result_part, size, tmpdir=None)[source]
easycv.apis.test.serialize_tensor(tensor_collection)[source]
easycv.apis.test.collect_results_gpu(result_part, size)[source]

easycv.apis.train module

easycv.apis.train.init_random_seed(seed=None, device='cuda')[source]

Initialize random seed. If the seed is not set, the seed will be automatically randomized, and then broadcast to all processes to prevent some potential bugs.

Parameters
  • seed (int, Optional) – The seed. Default to None.

  • device (str) – The device where the seed will be put on. Default to ‘cuda’.

Returns

Seed to be used.

Return type

int

easycv.apis.train.set_random_seed(seed, deterministic=False)[source]

Set random seed.

Parameters
  • seed (int) – Seed to be used.

  • deterministic (bool) – Whether to set the deterministic option for CUDNN backend, i.e., set torch.backends.cudnn.deterministic to True and torch.backends.cudnn.benchmark to False. Default: False.
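
Example

A small sketch combining this helper with init_random_seed above; passing device='cuda' assumes CUDA is available.

>>> from easycv.apis.train import init_random_seed, set_random_seed
>>> seed = init_random_seed(None, device='cuda')  # draw a random seed and broadcast it to all ranks
>>> set_random_seed(seed, deterministic=False)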

easycv.apis.train.train_model(model, data_loaders, cfg, distributed=False, timestamp=None, meta=None, use_fp16=False, validate=True, gpu_collect=True)[source]

Training API.

Parameters
  • model (nn.Module) – user defined model

  • data_loaders – a list of dataloaders for training data

  • cfg – config object

  • distributed – distributed training or not

  • timestamp – time str formatted as ‘%Y%m%d_%H%M%S’

  • meta – a dict containing meta data info, such as env_info, seed, iter, epoch

  • use_fp16 – use fp16 training or not

  • validate – do evaluation while training

  • gpu_collect – use gpu collect or cpu collect for tensor gathering
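
Example

A schematic call: model, train_loader and cfg are placeholders assumed to be built beforehand (the model and dataloader built from the config, and cfg loaded from one of the repository's config files).

>>> from easycv.apis.train import train_model
>>> train_model(model, [train_loader], cfg, distributed=False, use_fp16=False, validate=True)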

easycv.apis.train.get_skip_list_keywords(model)[source]
easycv.apis.train.build_optimizer(model, optimizer_cfg)[source]

Build optimizer from configs.

Parameters
  • model (nn.Module) – The model with parameters to be optimized.

  • optimizer_cfg (dict) –

    The config dict of the optimizer.

    Positional fields are:
    • type: class name of the optimizer.

    • lr: base learning rate.

    Optional fields are:
    • any arguments of the corresponding optimizer type, e.g., weight_decay, momentum, etc.

    • paramwise_options: a dict with regular expression as keys to match parameter names and a dict containing options as values. Options include 6 fields: lr, lr_mult, momentum, momentum_mult, weight_decay, weight_decay_mult.

Returns

The initialized optimizer.

Return type

torch.optim.Optimizer

Example

>>> model = torch.nn.modules.Conv1d(1, 1, 1)
>>> paramwise_options = {
...     r'(bn|gn)(\d+)?.(weight|bias)': dict(weight_decay_mult=0.1),
...     r'\Ahead.': dict(lr_mult=10, momentum=0)}
>>> optimizer_cfg = dict(type='SGD', lr=0.01, momentum=0.9,
...                      weight_decay=0.0001,
...                      paramwise_options=paramwise_options)
>>> optimizer = build_optimizer(model, optimizer_cfg)

easycv.apis.train_misc module

easycv.apis.train_misc.build_yolo_optimizer(model, optimizer_cfg)[source]

Build optimizer for YOLO.
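
Example

A hedged call sketch mirroring build_optimizer above: the exact optimizer_cfg fields accepted here are not documented, so the dict below is an assumption and model is a placeholder module.

>>> from easycv.apis.train_misc import build_yolo_optimizer
>>> optimizer = build_yolo_optimizer(
...     model, dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005))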