easycv.hooks package

class easycv.hooks.BestCkptSaverHook(by_epoch=True, save_optimizer=True, best_metric_name=[], best_metric_type=[], **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Save the best checkpoints periodically according to the configured metrics.

Parameters
  • by_epoch (bool) – Saving checkpoints by epoch or by iteration. Default: True.

  • save_optimizer (bool) – Whether to save optimizer state_dict in the checkpoint. It is usually used for resuming experiments. Default: True.

  • best_metric_name (List[str]) – Metric names used to select the best checkpoints, e.g. "neck_top1". Default: [] (do not save any best checkpoint).

  • best_metric_type (List[str]) – How "best" is defined for each metric: "max" or "min". If len(best_metric_type) < len(best_metric_name), the list is padded with "max".

__init__(by_epoch=True, save_optimizer=True, best_metric_name=[], best_metric_type=[], **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_run(runner)[source]
after_train_epoch(runner)[source]
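
For illustration, a minimal, hedged config sketch for registering this hook. The mmcv-style custom_hooks list is an assumption about the registration path, and the metric name "neck_top1" is taken from the docstring's example:

```python
# Hedged sketch: register BestCkptSaverHook via an mmcv-style config entry.
custom_hooks = [
    dict(
        type='BestCkptSaverHook',
        by_epoch=True,
        save_optimizer=True,
        best_metric_name=['neck_top1'],  # metrics that define "best"
        best_metric_type=['max'],        # higher is better
    )
]
```
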
easycv.hooks.build_hook(cfg, default_args=None)[source]
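
A hedged usage sketch for build_hook, assuming it resolves the type key against the hooks registry, as build_* helpers conventionally do in mmcv-based codebases:

```python
from easycv.hooks import build_hook

# Build a hook instance from a plain config dict (sketch).
hook = build_hook(dict(type='EMAHook', decay=0.9999))
```
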
class easycv.hooks.BYOLHook(end_momentum=1.0, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Momentum-update hook used in BYOL.

This hook performs the momentum adjustment in BYOL following:

m = 1 - (1 - m_0) * (cos(pi * k / K) + 1) / 2, where k is the current step and K the total number of steps.

__init__(end_momentum=1.0, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_iter(runner)[source]
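
The momentum formula above is easy to check numerically; a minimal sketch:

```python
import math

def byol_momentum(m_0, k, K):
    # m = 1 - (1 - m_0) * (cos(pi * k / K) + 1) / 2
    return 1 - (1 - m_0) * (math.cos(math.pi * k / K) + 1) / 2

# Ramps from m_0 at step 0 up to end_momentum = 1.0 at step K.
assert abs(byol_momentum(0.996, 0, 1000) - 0.996) < 1e-9
assert abs(byol_momentum(0.996, 1000, 1000) - 1.0) < 1e-9
```
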
class easycv.hooks.DINOHook(momentum_teacher=0.996, weight_decay=0.04, weight_decay_end=0.4, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Hook used in DINO: schedules the teacher momentum and the weight decay during training.

__init__(momentum_teacher=0.996, weight_decay=0.04, weight_decay_end=0.4, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_run(runner)[source]
before_train_iter(runner)[source]
after_train_iter(runner)[source]
before_train_epoch(runner)[source]
class easycv.hooks.EMAHook(decay=0.9999, copy_model_attr=())[source]

Bases: mmcv.runner.hooks.hook.Hook

Hook to carry out Exponential Moving Average

__init__(decay=0.9999, copy_model_attr=())[source]
Parameters
  • decay – decay rate for the exponential moving average

  • copy_model_attr – attributes to copy from the original model to the EMA model

before_run(runner)[source]
before_train_epoch(runner)[source]
after_train_iter(runner)[source]
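
A hedged sketch of the update rule an EMA hook applies after each training iteration (illustrative, not EasyCV's exact code):

```python
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.9999):
    # v_ema <- decay * v_ema + (1 - decay) * v_model for float tensors
    ema_state = ema_model.state_dict()
    for name, value in model.state_dict().items():
        if value.dtype.is_floating_point:
            ema_state[name].mul_(decay).add_(value, alpha=1 - decay)
        else:
            ema_state[name].copy_(value)  # e.g. BN num_batches_tracked
```
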
class easycv.hooks.DistEvalHook(dataloader, interval=1, mode='test', initial=False, gpu_collect=False, flush_buffer=True, broadcast_bn_buffer=True, **eval_kwargs)[source]

Bases: easycv.hooks.eval_hook.EvalHook

Distributed evaluation hook.

Attributes
  • dataloader (DataLoader) – A PyTorch dataloader.

  • interval (int) – Evaluation interval (by epochs). Default: 1.

  • mode (str) – Model forward mode.

  • tmpdir (str | None) – Temporary directory to save the results of all processes. Default: None.

  • gpu_collect (bool) – Whether to use GPU or CPU to collect results. Default: False.

  • broadcast_bn_buffer (bool) – Whether to broadcast the buffers (running_mean and running_var) of rank 0 to the other ranks before evaluation. Default: True.

__init__(dataloader, interval=1, mode='test', initial=False, gpu_collect=False, flush_buffer=True, broadcast_bn_buffer=True, **eval_kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

after_train_epoch(runner)[source]
class easycv.hooks.EvalHook(dataloader, initial=False, interval=1, mode='test', flush_buffer=True, **eval_kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Evaluation hook.

Attributes
  • dataloader (DataLoader) – A PyTorch dataloader.

  • interval (int) – Evaluation interval (by epochs). Default: 1.

  • mode (str) – Model forward mode.

  • flush_buffer (bool) – Whether to flush the log buffer.

__init__(dataloader, initial=False, interval=1, mode='test', flush_buffer=True, **eval_kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_run(runner)[source]
after_train_epoch(runner)[source]
add_visualization_info(runner, results)[source]
evaluate(runner, results)[source]
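
A hedged sketch of constructing and registering the hook on an mmcv runner (val_loader and runner are assumed to already exist):

```python
from easycv.hooks import EvalHook

eval_hook = EvalHook(val_loader, initial=False, interval=1, mode='test')
runner.register_hook(eval_hook)  # standard mmcv BaseRunner API
```
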
class easycv.hooks.ExportHook(cfg, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', export_after_each_ckpt=False)[source]

Bases: mmcv.runner.hooks.hook.Hook

Export the model when training on PAI.

__init__(cfg, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', export_after_each_ckpt=False)[source]
Parameters
  • cfg – config dict

  • ckpt_filename_tmpl – checkpoint filename template

export_model(runner, epoch)[source]
after_train_iter(runner)[source]
after_train_epoch(runner)[source]
after_run(runner)[source]
class easycv.hooks.Extractor(dataset, imgs_per_gpu, workers_per_gpu, dist_mode=False)[source]

Bases: object

__init__(dataset, imgs_per_gpu, workers_per_gpu, dist_mode=False)[source]

Initialize self. See help(type(self)) for accurate signature.

class easycv.hooks.OptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], multiply_key=[], multiply_rate=[])[source]

Bases: mmcv.runner.hooks.optimizer.OptimizerHook

__init__(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], multiply_key=[], multiply_rate=[])[source]

ignore_key (list[str]) – names of parameters whose gradients are set to zero before every optimizer step while epoch < ignore_key_epoch[i]. ignore_key_epoch (list[int]) – while epoch < ignore_key_epoch[i], the gradient of ignore_key[i] is set to zero. multiply_key (list[str]) – names of parameters that get a different learning-rate ratio, set by multiply_rate. multiply_rate (list[float]) – the ratio applied to multiply_key[i].

skip_ignore_key(runner)[source]
multiply_grad(runner)[source]
adapt_torchacc(runner)[source]
after_train_iter(runner)[source]
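
A hedged optimizer_config sketch exercising the extra arguments (the parameter-name prefixes 'head' and 'backbone' are hypothetical):

```python
optimizer_config = dict(
    update_interval=2,              # accumulate gradients over 2 iters
    grad_clip=dict(max_norm=10.0),  # standard mmcv gradient clipping
    ignore_key=['head'],            # zero these params' grads ...
    ignore_key_epoch=[5],           # ... while epoch < 5
    multiply_key=['backbone'],      # scale these params' grads ...
    multiply_rate=[0.1],            # ... by 0.1
)
```
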
class easycv.hooks.OSSSyncHook(work_dir, oss_work_dir, interval=1, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', other_file_list=[], iter_interval=None)[source]

Bases: mmcv.runner.hooks.hook.Hook

Upload log files and checkpoints to OSS when training on PAI.

__init__(work_dir, oss_work_dir, interval=1, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', other_file_list=[], iter_interval=None)[source]
Parameters
  • work_dir – work_dir in cfg

  • oss_work_dir – oss directory where to upload local files in work_dir

  • interval – upload frequency

  • ckpt_filename_tmpl – checkpoint filename template

  • other_file_list – other files that need to be uploaded to OSS

  • iter_interval – upload frequency by iteration interval. Default: None, meaning uploads happen only at the epoch interval.

upload_file(runner)[source]
after_train_iter(runner)[source]
after_train_epoch(runner)[source]
after_run(runner)[source]
class easycv.hooks.TIMEHook(end_momentum=1.0, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

This hook shows the time consumed by each step of the runner's training process.

__init__(end_momentum=1.0, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_iter(runner)[source]
after_train_iter(runner)[source]
class easycv.hooks.SWAVHook(gpu_batch_size=32, dump_path='data/', **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Hook in SWAV

__init__(gpu_batch_size=32, dump_path='data/', **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_run(runner)[source]
before_train_epoch(runner)[source]
after_train_epoch(runner)[source]
class easycv.hooks.SyncNormHook(no_aug_epochs=15, interval=1, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Synchronize Norm states after training epoch, currently used in YOLOX.

Parameters
  • no_aug_epochs (int) – The number of epochs at the end of training during which norm states are synchronized at the given interval. Default: 15.

  • interval (int) – Synchronizing norm interval. Default: 1.

__init__(no_aug_epochs=15, interval=1, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_epoch(runner)[source]
after_train_epoch(runner)[source]

Synchronizing norm.

class easycv.hooks.SyncRandomSizeHook(ratio_range=(14, 26), img_scale=(640, 640), interval=10, device='cuda', **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Change and synchronize the random image size across ranks, currently used in YOLOX.

Parameters
  • ratio_range (tuple[int]) – Random ratio range. The sampled ratio is multiplied by 32 to give the dataset output image size. Default: (14, 26).

  • img_scale (tuple[int]) – Size of input image. Default: (640, 640).

  • interval (int) – The interval (in iterations) at which the image size is changed. Default: 10.

  • device (torch.device | str) – device for returned tensors. Default: ‘cuda’.

__init__(ratio_range=(14, 26), img_scale=(640, 640), interval=10, device='cuda', **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

after_train_iter(runner)[source]

Change the dataset output image size.
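
A hedged sketch of how the synchronized size is plausibly derived: rank 0 samples a ratio, multiplies it by 32, and broadcasts the result to the other ranks:

```python
import random

ratio_range = (14, 26)
ratio = random.randint(*ratio_range)  # sampled once per `interval`
new_size = (ratio * 32, ratio * 32)   # 448x448 .. 832x832 pixels
# distributed case (sketch): torch.distributed.broadcast(size_tensor, src=0)
```
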

class easycv.hooks.TensorboardLoggerHookV2(log_dir=None, interval=10, ignore_last=True, reset_flag=False, by_epoch=True)[source]

Bases: mmcv.runner.hooks.logger.tensorboard.TensorboardLoggerHook

visualization_log(runner)[source]

Image visualization. visualization_buffer is a dictionary containing:

  • images (list): list of visualized images.

  • img_metas (list of dict, optional): dicts containing ori_filename and so on; ori_filename is displayed as the tag of the image by default.

log(runner)[source]
after_train_iter(runner)[source]
class easycv.hooks.WandbLoggerHookV2(init_kwargs=None, interval=10, ignore_last=True, reset_flag=False, commit=True, by_epoch=True, with_step=True)[source]

Bases: mmcv.runner.hooks.logger.wandb.WandbLoggerHook

visualization_log(runner)[source]

Image visualization. visualization_buffer is a dictionary containing:

  • images (list): list of visualized images.

  • img_metas (list of dict, optional): dicts containing ori_filename and so on; ori_filename is displayed as the tag of the image by default.

log(runner)[source]
after_train_iter(runner)[source]
class easycv.hooks.YOLOXLrUpdaterHook(num_last_epochs, **kwargs)[source]

Bases: mmcv.runner.hooks.lr_updater.CosineAnnealingLrUpdaterHook

YOLOX learning rate scheme.

There are two main differences between YOLOXLrUpdaterHook and CosineAnnealingLrUpdaterHook:

  1. When the current running epoch is greater than max_epochs - num_last_epochs, a fixed learning rate is used.

  2. The 'exp' warmup scheme differs from the one in MMCV's LrUpdaterHook.

Parameters

num_last_epochs (int) – The number of epochs with a fixed learning rate before the end of the training.

__init__(num_last_epochs, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

get_warmup_lr(cur_iters)[source]
get_lr(runner, base_lr)[source]
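
A hedged sketch of the resulting schedule: cosine annealing followed by a fixed tail for the last num_last_epochs. min_lr_ratio is borrowed from the CosineAnnealingLrUpdaterHook base class and is an assumption here:

```python
import math

def yolox_lr_sketch(base_lr, min_lr_ratio, cur_epoch, max_epochs, num_last_epochs):
    min_lr = base_lr * min_lr_ratio
    if cur_epoch >= max_epochs - num_last_epochs:
        return min_lr  # fixed learning rate for the last epochs
    progress = cur_epoch / (max_epochs - num_last_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```
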
class easycv.hooks.YOLOXModeSwitchHook(no_aug_epochs=15, skip_type_keys=('MMMosaic', 'MMRandomAffine', 'MMMixUp'), **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Switch the mode of YOLOX during training.

This hook turns off the mosaic and mixup data augmentation and switches to use L1 loss in bbox_head.

Parameters

no_aug_epochs – The number of epochs at the end of training during which the data augmentations are turned off and the L1 loss is used. Default: 15.

__init__(no_aug_epochs=15, skip_type_keys=('MMMosaic', 'MMRandomAffine', 'MMMixUp'), **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_epoch(runner)[source]

Turn off mosaic and mixup augmentation and switch to the L1 loss.
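
A hedged sketch of the switch logic; the dataset method update_skip_type_keys and the bbox_head.use_l1 flag mirror the mmdet convention and are assumptions here:

```python
def maybe_switch_mode(runner, no_aug_epochs, skip_type_keys):
    # Entering the last `no_aug_epochs`: drop the listed augmentations
    # and enable the L1 loss term in the bbox head (sketch).
    if runner.epoch + 1 == runner.max_epochs - no_aug_epochs:
        runner.data_loader.dataset.update_skip_type_keys(skip_type_keys)
        runner.model.module.bbox_head.use_l1 = True
```
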

class easycv.hooks.MixupCollateHook(**kwargs)[source]

Bases: easycv.hooks.collate_hook.BaseCollateHook

Mix up the data batch; should be applied after the collate function merges a list of samples into a mini-batch of Tensor(s).

__init__(**kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

after_collate(results)[source]
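
A hedged sketch of standard mixup on an already-collated batch (not necessarily EasyCV's exact hook logic):

```python
import torch

def mixup_batch(imgs, labels, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(imgs.size(0))
    mixed = lam * imgs + (1 - lam) * imgs[perm]
    # the loss is later combined as lam * loss(a) + (1 - lam) * loss(b)
    return mixed, labels, labels[perm], lam
```
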
class easycv.hooks.PreLoggerHook(interval=10, ignore_last=True, reset_flag=False, by_epoch=True)[source]

Bases: mmcv.runner.hooks.logger.base.LoggerHook

fetch_tensor(runner, n=0)[source]

Fetch the latest n values (or all values) and convert tensor types to numpy for dumping logs.

after_train_iter(runner)[source]
after_val_epoch(runner)[source]
class easycv.hooks.StepFixCosineAnnealingLrUpdaterHook(min_lr=None, min_lr_ratio=None, **kwargs)[source]

Bases: mmcv.runner.hooks.lr_updater.CosineAnnealingLrUpdaterHook

get_warmup_lr(cur_iters)[source]
get_lr(runner, base_lr)[source]
class easycv.hooks.CosineAnnealingWarmupByEpochLrUpdaterHook(min_lr=None, min_lr_ratio=None, **kwargs)[source]

Bases: mmcv.runner.hooks.lr_updater.CosineAnnealingLrUpdaterHook

before_train_iter(runner: mmcv.runner.base_runner.BaseRunner)[source]
class easycv.hooks.ThroughputHook(warmup_iters=0, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Count the throughput (samples per second) of all steps in the history. warmup_iters can be set to skip the first few steps if their initialization is slow.

__init__(warmup_iters=0, **kwargs) → None[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_epoch(runner)[source]

Reset the counters at the start of each epoch.

before_train_iter(runner)[source]
after_train_iter(runner)[source]
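
A hedged sketch of the measurement itself: samples per second over all steps, discarding the first warmup_iters:

```python
import time

def measure_throughput(num_iters, batch_size, warmup_iters=0):
    start, samples = time.time(), 0
    for i in range(num_iters):
        if i == warmup_iters:          # discard slow startup iterations
            start, samples = time.time(), 0
        # ... one training iteration would run here ...
        samples += batch_size
    return samples / max(time.time() - start, 1e-8)  # samples/second
```
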
class easycv.hooks.AMPFP16OptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], loss_scale={})[source]

Bases: easycv.hooks.optimizer_hook.OptimizerHook

__init__(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], loss_scale={})[source]

ignore_key (list[str]) – names of parameters whose gradients are set to zero before every optimizer step while epoch < ignore_key_epoch[i]. ignore_key_epoch (list[int]) – while epoch < ignore_key_epoch[i], the gradient of ignore_key[i] is set to zero. loss_scale (float | dict) – gradient scale config. If loss_scale is a float, static loss scaling is used with the specified scale.

It can also be a dict containing arguments for GradScaler; for PyTorch >= 1.6 the official torch.cuda.amp.GradScaler is used. Please refer to https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler for the parameters.

before_run(runner)[source]
after_train_iter(runner)[source]
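
A hedged sketch of the scale/backward/step cycle such a hook drives, using the official torch.cuda.amp.GradScaler mentioned above; model, batch, and optimizer are assumed to already exist:

```python
import torch

scaler = torch.cuda.amp.GradScaler()   # or GradScaler(**loss_scale)

with torch.cuda.amp.autocast():        # fp16 forward pass
    loss = model(batch)                # assumed to return a scalar loss
scaler.scale(loss).backward()          # backward on the scaled loss
scaler.step(optimizer)                 # unscale, skip step on inf/nan
scaler.update()                        # adjust the scale factor
optimizer.zero_grad()
```
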

Submodules

easycv.hooks.best_ckpt_saver_hook module

class easycv.hooks.best_ckpt_saver_hook.BestCkptSaverHook(by_epoch=True, save_optimizer=True, best_metric_name=[], best_metric_type=[], **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Save the best checkpoints periodically according to the configured metrics.

Parameters
  • by_epoch (bool) – Saving checkpoints by epoch or by iteration. Default: True.

  • save_optimizer (bool) – Whether to save optimizer state_dict in the checkpoint. It is usually used for resuming experiments. Default: True.

  • best_metric_name (List[str]) – Metric names used to select the best checkpoints, e.g. "neck_top1". Default: [] (do not save any best checkpoint).

  • best_metric_type (List[str]) – How "best" is defined for each metric: "max" or "min". If len(best_metric_type) < len(best_metric_name), the list is padded with "max".

__init__(by_epoch=True, save_optimizer=True, best_metric_name=[], best_metric_type=[], **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_run(runner)[source]
after_train_epoch(runner)[source]

easycv.hooks.builder module

easycv.hooks.builder.build_hook(cfg, default_args=None)[source]

easycv.hooks.byol_hook module

class easycv.hooks.byol_hook.BYOLHook(end_momentum=1.0, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Momentum-update hook used in BYOL.

This hook performs the momentum adjustment in BYOL following:

m = 1 - (1 - m_0) * (cos(pi * k / K) + 1) / 2, where k is the current step and K the total number of steps.

__init__(end_momentum=1.0, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_iter(runner)[source]

easycv.hooks.dino_hook module

easycv.hooks.dino_hook.cosine_scheduler(base_value, final_value, epochs, niter_per_ep, warmup_epochs=0, start_warmup_value=0)[source]
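
A hedged re-implementation matching the signature above: linear warmup followed by per-iteration cosine decay, the DINO-style schedule (EasyCV's exact code may differ):

```python
import numpy as np

def cosine_scheduler_sketch(base_value, final_value, epochs, niter_per_ep,
                            warmup_epochs=0, start_warmup_value=0):
    warmup_iters = warmup_epochs * niter_per_ep
    warmup = np.linspace(start_warmup_value, base_value, warmup_iters)
    iters = np.arange(epochs * niter_per_ep - warmup_iters)
    decay = final_value + 0.5 * (base_value - final_value) * (
        1 + np.cos(np.pi * iters / len(iters)))
    return np.concatenate((warmup, decay))  # one value per iteration
```
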
class easycv.hooks.dino_hook.DINOHook(momentum_teacher=0.996, weight_decay=0.04, weight_decay_end=0.4, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Hook used in DINO: schedules the teacher momentum and the weight decay during training.

__init__(momentum_teacher=0.996, weight_decay=0.04, weight_decay_end=0.4, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_run(runner)[source]
before_train_iter(runner)[source]
after_train_iter(runner)[source]
before_train_epoch(runner)[source]

easycv.hooks.ema_hook module

class easycv.hooks.ema_hook.ModelEMA(model, decay=0.9999, updates=0)[source]

Bases: object

Model Exponential Moving Average, from https://github.com/rwightman/pytorch-image-models. Keeps a moving average of everything in the model state_dict (parameters and buffers). This is intended to allow functionality like https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage. A smoothed version of the weights is necessary for some training schemes to perform well. This class is sensitive to where it is initialized in the sequence of model init, GPU assignment, and distributed-training wrappers.

In YOLOv5s, EMA helps increase mAP from 0.27 to 0.353.

__init__(model, decay=0.9999, updates=0)[source]

Initialize self. See help(type(self)) for accurate signature.

update(model)[source]
update_attr(model, include=(), exclude=('process_group', 'reducer'))[source]
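
A hedged usage sketch following the rwightman-style API above; model and loader are assumed to already exist:

```python
from easycv.hooks.ema_hook import ModelEMA

ema = ModelEMA(model, decay=0.9999)
for batch in loader:
    loss = model(batch)
    # ... backward and optimizer.step() ...
    ema.update(model)          # after every optimizer step
ema.update_attr(model)         # copy non-tensor attributes at the end
```
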
class easycv.hooks.ema_hook.EMAHook(decay=0.9999, copy_model_attr=())[source]

Bases: mmcv.runner.hooks.hook.Hook

Hook to carry out Exponential Moving Average

__init__(decay=0.9999, copy_model_attr=())[source]
Parameters
  • decay – decay rate for the exponential moving average

  • copy_model_attr – attributes to copy from the original model to the EMA model

before_run(runner)[source]
before_train_epoch(runner)[source]
after_train_iter(runner)[source]

easycv.hooks.eval_hook module

class easycv.hooks.eval_hook.EvalHook(dataloader, initial=False, interval=1, mode='test', flush_buffer=True, **eval_kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Evaluation hook.

Attributes
  • dataloader (DataLoader) – A PyTorch dataloader.

  • interval (int) – Evaluation interval (by epochs). Default: 1.

  • mode (str) – Model forward mode.

  • flush_buffer (bool) – Whether to flush the log buffer.

__init__(dataloader, initial=False, interval=1, mode='test', flush_buffer=True, **eval_kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_run(runner)[source]
after_train_epoch(runner)[source]
add_visualization_info(runner, results)[source]
evaluate(runner, results)[source]
class easycv.hooks.eval_hook.DistEvalHook(dataloader, interval=1, mode='test', initial=False, gpu_collect=False, flush_buffer=True, broadcast_bn_buffer=True, **eval_kwargs)[source]

Bases: easycv.hooks.eval_hook.EvalHook

Distributed evaluation hook.

Attributes
  • dataloader (DataLoader) – A PyTorch dataloader.

  • interval (int) – Evaluation interval (by epochs). Default: 1.

  • mode (str) – Model forward mode.

  • tmpdir (str | None) – Temporary directory to save the results of all processes. Default: None.

  • gpu_collect (bool) – Whether to use GPU or CPU to collect results. Default: False.

  • broadcast_bn_buffer (bool) – Whether to broadcast the buffers (running_mean and running_var) of rank 0 to the other ranks before evaluation. Default: True.

__init__(dataloader, interval=1, mode='test', initial=False, gpu_collect=False, flush_buffer=True, broadcast_bn_buffer=True, **eval_kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

after_train_epoch(runner)[source]

easycv.hooks.export_hook module

class easycv.hooks.export_hook.ExportHook(cfg, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', export_after_each_ckpt=False)[source]

Bases: mmcv.runner.hooks.hook.Hook

Export the model when training on PAI.

__init__(cfg, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', export_after_each_ckpt=False)[source]
Parameters
  • cfg – config dict

  • ckpt_filename_tmpl – checkpoint filename template

export_model(runner, epoch)[source]
after_train_iter(runner)[source]
after_train_epoch(runner)[source]
after_run(runner)[source]

easycv.hooks.extractor module

class easycv.hooks.extractor.Extractor(dataset, imgs_per_gpu, workers_per_gpu, dist_mode=False)[source]

Bases: object

__init__(dataset, imgs_per_gpu, workers_per_gpu, dist_mode=False)[source]

Initialize self. See help(type(self)) for accurate signature.

easycv.hooks.optimizer_hook module

class easycv.hooks.optimizer_hook.OptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], multiply_key=[], multiply_rate=[])[source]

Bases: mmcv.runner.hooks.optimizer.OptimizerHook

__init__(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], multiply_key=[], multiply_rate=[])[source]

ignore_key (list[str]) – names of parameters whose gradients are set to zero before every optimizer step while epoch < ignore_key_epoch[i]. ignore_key_epoch (list[int]) – while epoch < ignore_key_epoch[i], the gradient of ignore_key[i] is set to zero. multiply_key (list[str]) – names of parameters that get a different learning-rate ratio, set by multiply_rate. multiply_rate (list[float]) – the ratio applied to multiply_key[i].

skip_ignore_key(runner)[source]
multiply_grad(runner)[source]
adapt_torchacc(runner)[source]
after_train_iter(runner)[source]
class easycv.hooks.optimizer_hook.AMPFP16OptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], loss_scale={})[source]

Bases: easycv.hooks.optimizer_hook.OptimizerHook

__init__(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], loss_scale={})[source]

ignore_key (list[str]) – names of parameters whose gradients are set to zero before every optimizer step while epoch < ignore_key_epoch[i]. ignore_key_epoch (list[int]) – while epoch < ignore_key_epoch[i], the gradient of ignore_key[i] is set to zero. loss_scale (float | dict) – gradient scale config. If loss_scale is a float, static loss scaling is used with the specified scale.

It can also be a dict containing arguments for GradScaler; for PyTorch >= 1.6 the official torch.cuda.amp.GradScaler is used. Please refer to https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler for the parameters.

before_run(runner)[source]
after_train_iter(runner)[source]

easycv.hooks.oss_sync_hook module

class easycv.hooks.oss_sync_hook.OSSSyncHook(work_dir, oss_work_dir, interval=1, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', other_file_list=[], iter_interval=None)[source]

Bases: mmcv.runner.hooks.hook.Hook

Upload log files and checkpoints to OSS when training on PAI.

__init__(work_dir, oss_work_dir, interval=1, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', other_file_list=[], iter_interval=None)[source]
Parameters
  • work_dir – work_dir in cfg

  • oss_work_dir – oss directory where to upload local files in work_dir

  • interval – upload frequency

  • ckpt_filename_tmpl – checkpoint filename template

  • other_file_list – other files that need to be uploaded to OSS

  • iter_interval – upload frequency by iteration interval. Default: None, meaning uploads happen only at the epoch interval.

upload_file(runner)[source]
after_train_iter(runner)[source]
after_train_epoch(runner)[source]
after_run(runner)[source]

easycv.hooks.registry module

easycv.hooks.show_time_hook module

class easycv.hooks.show_time_hook.TIMEHook(end_momentum=1.0, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

This hook shows the time consumed by each step of the runner's training process.

__init__(end_momentum=1.0, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_iter(runner)[source]
after_train_iter(runner)[source]

easycv.hooks.swav_hook module

class easycv.hooks.swav_hook.SWAVHook(gpu_batch_size=32, dump_path='data/', **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Hook in SWAV

__init__(gpu_batch_size=32, dump_path='data/', **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_run(runner)[source]
before_train_epoch(runner)[source]
after_train_epoch(runner)[source]

easycv.hooks.sync_norm_hook module

easycv.hooks.sync_norm_hook.get_norm_states(module)[source]
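
A hedged sketch of what get_norm_states plausibly gathers, namely the state (running stats and affine parameters) of every batch-norm submodule:

```python
from collections import OrderedDict
from torch.nn.modules.batchnorm import _BatchNorm

def get_norm_states_sketch(module):
    states = OrderedDict()
    for name, child in module.named_modules():
        if isinstance(child, _BatchNorm):
            for k, v in child.state_dict().items():
                states[f'{name}.{k}'] = v  # running stats + affine params
    return states
```
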
class easycv.hooks.sync_norm_hook.SyncNormHook(no_aug_epochs=15, interval=1, **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Synchronize Norm states after training epoch, currently used in YOLOX.

Parameters
  • no_aug_epochs (int) – The number of epochs at the end of training during which norm states are synchronized at the given interval. Default: 15.

  • interval (int) – Synchronizing norm interval. Default: 1.

__init__(no_aug_epochs=15, interval=1, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_epoch(runner)[source]
after_train_epoch(runner)[source]

Synchronizing norm.

easycv.hooks.sync_random_size_hook module

class easycv.hooks.sync_random_size_hook.SyncRandomSizeHook(ratio_range=(14, 26), img_scale=(640, 640), interval=10, device='cuda', **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Change and synchronize the random image size across ranks, currently used in YOLOX.

Parameters
  • ratio_range (tuple[int]) – Random ratio range. The sampled ratio is multiplied by 32 to give the dataset output image size. Default: (14, 26).

  • img_scale (tuple[int]) – Size of input image. Default: (640, 640).

  • interval (int) – The interval (in iterations) at which the image size is changed. Default: 10.

  • device (torch.device | str) – device for returned tensors. Default: ‘cuda’.

__init__(ratio_range=(14, 26), img_scale=(640, 640), interval=10, device='cuda', **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

after_train_iter(runner)[source]

Change the dataset output image size.

easycv.hooks.tensorboard module

class easycv.hooks.tensorboard.TensorboardLoggerHookV2(log_dir=None, interval=10, ignore_last=True, reset_flag=False, by_epoch=True)[source]

Bases: mmcv.runner.hooks.logger.tensorboard.TensorboardLoggerHook

visualization_log(runner)[source]

Image visualization. visualization_buffer is a dictionary containing:

  • images (list): list of visualized images.

  • img_metas (list of dict, optional): dicts containing ori_filename and so on; ori_filename is displayed as the tag of the image by default.

log(runner)[source]
after_train_iter(runner)[source]

easycv.hooks.wandb module

class easycv.hooks.wandb.WandbLoggerHookV2(init_kwargs=None, interval=10, ignore_last=True, reset_flag=False, commit=True, by_epoch=True, with_step=True)[source]

Bases: mmcv.runner.hooks.logger.wandb.WandbLoggerHook

visualization_log(runner)[source]

Image visualization. visualization_buffer is a dictionary containing:

  • images (list): list of visualized images.

  • img_metas (list of dict, optional): dicts containing ori_filename and so on; ori_filename is displayed as the tag of the image by default.

log(runner)[source]
after_train_iter(runner)[source]

easycv.hooks.yolox_lr_hook module

class easycv.hooks.yolox_lr_hook.YOLOXLrUpdaterHook(num_last_epochs, **kwargs)[source]

Bases: mmcv.runner.hooks.lr_updater.CosineAnnealingLrUpdaterHook

YOLOX learning rate scheme.

There are two main differences between YOLOXLrUpdaterHook and CosineAnnealingLrUpdaterHook:

  1. When the current running epoch is greater than max_epochs - num_last_epochs, a fixed learning rate is used.

  2. The 'exp' warmup scheme differs from the one in MMCV's LrUpdaterHook.

Parameters

num_last_epochs (int) – The number of epochs with a fixed learning rate before the end of the training.

__init__(num_last_epochs, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

get_warmup_lr(cur_iters)[source]
get_lr(runner, base_lr)[source]

easycv.hooks.yolox_mode_switch_hook module

class easycv.hooks.yolox_mode_switch_hook.YOLOXModeSwitchHook(no_aug_epochs=15, skip_type_keys=('MMMosaic', 'MMRandomAffine', 'MMMixUp'), **kwargs)[source]

Bases: mmcv.runner.hooks.hook.Hook

Switch the mode of YOLOX during training.

This hook turns off the mosaic and mixup data augmentation and switches to use L1 loss in bbox_head.

Parameters

no_aug_epochs – The number of epochs at the end of training during which the data augmentations are turned off and the L1 loss is used. Default: 15.

__init__(no_aug_epochs=15, skip_type_keys=('MMMosaic', 'MMRandomAffine', 'MMMixUp'), **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

before_train_epoch(runner)[source]

Turn off mosaic and mixup augmentation and switch to the L1 loss.