easycv.hooks package¶
- class easycv.hooks.BestCkptSaverHook(by_epoch=True, save_optimizer=True, best_metric_name=[], best_metric_type=[], **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Save checkpoints periodically.
- Parameters
by_epoch (bool) – Saving checkpoints by epoch or by iteration. Default: True.
save_optimizer (bool) – Whether to save optimizer state_dict in the checkpoint. It is usually used for resuming experiments. Default: True.
best_metric_name (List(str)) – Metric names by which to save the best checkpoints, e.g. “neck_top1”. Default: [] (save nothing).
best_metric_type (List(str)) – Metric types that define “best”; each entry should be “max” or “min”. If len(best_metric_type) < len(best_metric_name), “max” is appended for the missing entries.
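For illustration, a minimal config sketch follows; registering the hook through custom_hooks in an mmcv-style config is an assumption here, and “neck_top1” is only the example metric name from above:

    custom_hooks = [
        dict(
            type='BestCkptSaverHook',
            by_epoch=True,
            save_optimizer=True,
            best_metric_name=['neck_top1'],  # evaluation metric to track
            best_metric_type=['max'],        # larger value is better
        )
    ]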
- class easycv.hooks.BYOLHook(end_momentum=1.0, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Hook used in BYOL.
- This hook performs the momentum adjustment in BYOL following:
m = 1 - (1 - m_0) * (cos(pi * k / K) + 1) / 2, where k is the current step and K is the total number of steps.
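The schedule can be written out directly. Below is a standalone sketch of the formula above; treating m_0 as the model's configured base momentum and end_momentum as the asymptote are assumptions about how the hook applies it:

    import math

    def byol_momentum(k, K, m_0=0.996, end_momentum=1.0):
        # m = 1 - (1 - m_0) * (cos(pi * k / K) + 1) / 2, generalized so the
        # schedule runs from m_0 (at k = 0) to end_momentum (at k = K)
        return end_momentum - (end_momentum - m_0) * (math.cos(math.pi * k / K) + 1) / 2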
- class easycv.hooks.DINOHook(momentum_teacher=0.996, weight_decay=0.04, weight_decay_end=0.4, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Hook used in DINO.
- class easycv.hooks.EMAHook(decay=0.9999, copy_model_attr=())[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Hook to maintain an Exponential Moving Average (EMA) of the model.
- class easycv.hooks.DistEvalHook(dataloader, interval=1, mode='test', initial=False, gpu_collect=False, flush_buffer=True, broadcast_bn_buffer=True, **eval_kwargs)[source]¶
Bases:
easycv.hooks.eval_hook.EvalHook
Distributed evaluation hook.
- dataloader¶
A PyTorch dataloader.
- Type
DataLoader
- interval¶
Evaluation interval (by epochs). Default: 1.
- Type
int
- mode¶
Model forward mode. Default: 'test'.
- Type
str
- tmpdir¶
Temporary directory to save the results of all processes. Default: None.
- Type
str | None
- gpu_collect¶
Whether to use GPU or CPU to collect results. Default: False.
- Type
bool
- broadcast_bn_buffer¶
Whether to broadcast the buffers (running_mean and running_var) of rank 0 to the other ranks before evaluation. Default: True.
- Type
bool
- class easycv.hooks.EvalHook(dataloader, initial=False, interval=1, mode='test', flush_buffer=True, **eval_kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Evaluation hook.
- dataloader¶
A PyTorch dataloader.
- Type
DataLoader
- interval¶
Evaluation interval (by epochs). Default: 1.
- Type
int
- mode¶
Model forward mode. Default: 'test'.
- Type
str
- flush_buffer¶
Whether to flush the log buffer after evaluation.
- Type
bool
- class easycv.hooks.ExportHook(cfg, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', export_after_each_ckpt=False)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Export the model when training on PAI.
- class easycv.hooks.Extractor(dataset, imgs_per_gpu, workers_per_gpu, dist_mode=False)[source]¶
Bases:
object
- class easycv.hooks.OptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], multiply_key=[], multiply_rate=[])[source]¶
Bases:
mmcv.runner.hooks.optimizer.OptimizerHook
- __init__(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], multiply_key=[], multiply_rate=[])[source]¶
- Parameters
ignore_key ([str, …]) – Names of parameters whose gradients will be set to zero before every optimizer step while epoch < ignore_key_epoch[i].
ignore_key_epoch ([int, …]) – While epoch < ignore_key_epoch[i], the gradient of ignore_key[i] is set to zero.
multiply_key ([str, …]) – Names of parameters that use a different learning rate ratio, given by multiply_rate.
multiply_rate ([float, …]) – Learning rate ratios corresponding to multiply_key.
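A hedged usage sketch: the parameter names 'head' and 'backbone' are illustrative, and passing the hook via an mmcv-style optimizer_config dict is an assumption:

    optimizer_config = dict(
        type='OptimizerHook',
        update_interval=1,
        grad_clip=dict(max_norm=10.0),
        ignore_key=['head'],        # zero these gradients ...
        ignore_key_epoch=[5],       # ... while epoch < 5
        multiply_key=['backbone'],  # scale this parameter's lr ...
        multiply_rate=[0.1],        # ... by 0.1
    )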
- class easycv.hooks.OSSSyncHook(work_dir, oss_work_dir, interval=1, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', other_file_list=[], iter_interval=None)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Upload log files and checkpoints to OSS when training on PAI.
- __init__(work_dir, oss_work_dir, interval=1, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', other_file_list=[], iter_interval=None)[source]¶
- Parameters
work_dir – work_dir in cfg
oss_work_dir – OSS directory to which local files in work_dir are uploaded
interval – upload frequency
ckpt_filename_tmpl – checkpoint filename template
other_file_list – other files that need to be uploaded to OSS
iter_interval – upload frequency by iteration interval. Default: None, meaning no additional iteration-based uploading is performed.
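A hedged instantiation sketch; the bucket path and extra file names below are placeholders, not EasyCV defaults:

    hook = OSSSyncHook(
        work_dir='work_dirs/exp1',            # local work_dir from the cfg
        oss_work_dir='oss://my-bucket/exp1',  # remote destination (placeholder)
        interval=1,                           # upload once per epoch
        other_file_list=['metrics.json'],     # extra files to upload (placeholder)
    )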
- class easycv.hooks.TIMEHook(end_momentum=1.0, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
This hook shows timing information for the runner's running process.
- class easycv.hooks.SWAVHook(gpu_batch_size=32, dump_path='data/', **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Hook used in SWAV.
- class easycv.hooks.SyncNormHook(no_aug_epochs=15, interval=1, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Synchronize norm states after each training epoch; currently used in YOLOX.
- Parameters
no_aug_epochs (int) – The number of final training epochs during which norm states are synchronized at the given interval. Default: 15.
interval (int) – Synchronizing norm interval. Default: 1.
- class easycv.hooks.SyncRandomSizeHook(ratio_range=(14, 26), img_scale=(640, 640), interval=10, device='cuda', **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Change and synchronize the random image size across ranks, currently used in YOLOX.
- Parameters
ratio_range (tuple[int]) – Random ratio range. A sampled ratio is multiplied by 32 to obtain the dataset's output image size. Default: (14, 26).
img_scale (tuple[int]) – Size of input image. Default: (640, 640).
interval (int) – The interval of change image size. Default: 10.
device (torch.device | str) – device for returned tensors. Default: ‘cuda’.
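The size computation described above amounts to the following sketch (the broadcast of the sampled size from rank 0 to the other ranks is omitted):

    import random

    ratio_range = (14, 26)
    ratio = random.randint(*ratio_range)  # sampled on rank 0 in the real hook
    new_size = ratio * 32                 # 448..832 for the default range
    print(new_size)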
- class easycv.hooks.TensorboardLoggerHookV2(log_dir=None, interval=10, ignore_last=True, reset_flag=False, by_epoch=True)[source]¶
Bases:
mmcv.runner.hooks.logger.tensorboard.TensorboardLoggerHook
- class easycv.hooks.WandbLoggerHookV2(init_kwargs=None, interval=10, ignore_last=True, reset_flag=False, commit=True, by_epoch=True, with_step=True)[source]¶
Bases:
mmcv.runner.hooks.logger.wandb.WandbLoggerHook
- class easycv.hooks.YOLOXLrUpdaterHook(num_last_epochs, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.lr_updater.CosineAnnealingLrUpdaterHook
YOLOX learning rate scheme.
There are two main differences between YOLOXLrUpdaterHook and CosineAnnealingLrUpdaterHook:
- When the current epoch is greater than max_epoch - num_last_epochs, a fixed learning rate is used.
- The exponential warmup scheme differs from that of LrUpdaterHook in MMCV.
- Parameters
num_last_epochs (int) – The number of epochs with a fixed learning rate before the end of the training.
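As a sketch of the first difference (not the hook's actual code), the cosine schedule is frozen at its minimum once the fixed-lr tail begins; min_lr here stands in for whatever minimum the cosine annealing config defines:

    import math

    def yolox_lr(epoch, max_epochs, num_last_epochs, base_lr, min_lr):
        if epoch >= max_epochs - num_last_epochs:
            return min_lr  # fixed learning rate for the last epochs
        t = epoch / (max_epochs - num_last_epochs)
        return min_lr + (base_lr - min_lr) * (1 + math.cos(math.pi * t)) / 2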
- class easycv.hooks.YOLOXModeSwitchHook(no_aug_epochs=15, skip_type_keys=('MMMosaic', 'MMRandomAffine', 'MMMixUp'), **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Switch the mode of YOLOX during training.
This hook turns off the mosaic and mixup data augmentation and switches to use L1 loss in bbox_head.
- Parameters
no_aug_epochs – The number of final training epochs during which the data augmentation is turned off and L1 loss is used. Default: 15.
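A minimal config sketch; the custom_hooks registration point is an assumption, and the skip_type_keys value is the default from the signature above:

    custom_hooks = [
        dict(
            type='YOLOXModeSwitchHook',
            no_aug_epochs=15,
            skip_type_keys=('MMMosaic', 'MMRandomAffine', 'MMMixUp'),
        )
    ]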
- class easycv.hooks.MixupCollateHook(**kwargs)[source]¶
Bases:
easycv.hooks.collate_hook.BaseCollateHook
Mixes up the data batch; should be applied after a list of samples has been merged into a mini-batch of Tensor(s).
- class easycv.hooks.PreLoggerHook(interval=10, ignore_last=True, reset_flag=False, by_epoch=True)[source]¶
Bases:
mmcv.runner.hooks.logger.base.LoggerHook
- class easycv.hooks.StepFixCosineAnnealingLrUpdaterHook(min_lr=None, min_lr_ratio=None, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.lr_updater.CosineAnnealingLrUpdaterHook
- class easycv.hooks.CosineAnnealingWarmupByEpochLrUpdaterHook(min_lr=None, min_lr_ratio=None, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.lr_updater.CosineAnnealingLrUpdaterHook
- class easycv.hooks.ThroughputHook(warmup_iters=0, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Counts the throughput (steps per second) over all steps in the history. warmup_iters can be set to exclude the first few steps from the calculation if their initialization is slow.
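The measurement reduces to the sketch below (standalone, not the hook's code): time only the steps after warmup_iters and divide the step count by the elapsed time.

    import time

    warmup_iters, steps = 10, 0
    for i in range(100):
        if i == warmup_iters:
            t0 = time.time()   # start timing once warmup is done
        time.sleep(0.01)       # stand-in for one training step
        if i >= warmup_iters:
            steps += 1
    print(f'throughput: {steps / (time.time() - t0):.1f} steps/s')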
- class easycv.hooks.AMPFP16OptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], loss_scale={})[source]¶
Bases:
easycv.hooks.optimizer_hook.OptimizerHook
- __init__(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], loss_scale={})[source]¶
- Parameters
ignore_key ([str, …]) – Names of parameters whose gradients will be set to zero before every optimizer step while epoch < ignore_key_epoch[i].
ignore_key_epoch ([int, …]) – While epoch < ignore_key_epoch[i], the gradient of ignore_key[i] is set to zero.
loss_scale (float | dict) – Gradient scaling config. If loss_scale is a float, static loss scaling is used with the specified scale. It can also be a dict containing arguments for GradScaler; for PyTorch >= 1.6 the official torch.cuda.amp.GradScaler is used. Please refer to https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler for the parameters.
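As the note above says, a dict loss_scale carries GradScaler arguments; the correspondence looks roughly like this (the exact forwarding inside the hook is assumed):

    import torch

    loss_scale = dict(init_scale=65536.0, growth_interval=2000)
    # ...which corresponds to constructing the scaler as:
    scaler = torch.cuda.amp.GradScaler(**loss_scale)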
Submodules¶
easycv.hooks.best_ckpt_saver_hook module¶
- class easycv.hooks.best_ckpt_saver_hook.BestCkptSaverHook(by_epoch=True, save_optimizer=True, best_metric_name=[], best_metric_type=[], **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Save checkpoints periodically.
- Parameters
by_epoch (bool) – Saving checkpoints by epoch or by iteration. Default: True.
save_optimizer (bool) – Whether to save optimizer state_dict in the checkpoint. It is usually used for resuming experiments. Default: True.
best_metric_name (List(str)) – Metric names by which to save the best checkpoints, e.g. “neck_top1”. Default: [] (save nothing).
best_metric_type (List(str)) – Metric types that define “best”; each entry should be “max” or “min”. If len(best_metric_type) < len(best_metric_name), “max” is appended for the missing entries.
easycv.hooks.byol_hook module¶
- class easycv.hooks.byol_hook.BYOLHook(end_momentum=1.0, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Hook used in BYOL.
- This hook performs the momentum adjustment in BYOL following:
m = 1 - (1 - m_0) * (cos(pi * k / K) + 1) / 2, where k is the current step and K is the total number of steps.
easycv.hooks.dino_hook module¶
- easycv.hooks.dino_hook.cosine_scheduler(base_value, final_value, epochs, niter_per_ep, warmup_epochs=0, start_warmup_value=0)[source]¶
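No docstring is given for cosine_scheduler; the sketch below is modeled on the DINO reference implementation and matches the signature above, but the exact EasyCV code may differ. It returns one value per training iteration, warming up linearly and then decaying cosine-style to final_value.

    import numpy as np

    def cosine_scheduler(base_value, final_value, epochs, niter_per_ep,
                         warmup_epochs=0, start_warmup_value=0):
        warmup_iters = warmup_epochs * niter_per_ep
        warmup = np.linspace(start_warmup_value, base_value, warmup_iters)
        iters = np.arange(epochs * niter_per_ep - warmup_iters)
        decay = final_value + 0.5 * (base_value - final_value) * (
            1 + np.cos(np.pi * iters / len(iters)))
        return np.concatenate((warmup, decay))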
- class easycv.hooks.dino_hook.DINOHook(momentum_teacher=0.996, weight_decay=0.04, weight_decay_end=0.4, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Hook used in DINO.
easycv.hooks.ema_hook module¶
- class easycv.hooks.ema_hook.ModelEMA(model, decay=0.9999, updates=0)[source]¶
Bases:
object
Model Exponential Moving Average, from https://github.com/rwightman/pytorch-image-models. Keeps a moving average of everything in the model state_dict (parameters and buffers). This is intended to provide functionality like https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage. A smoothed version of the weights is necessary for some training schemes to perform well. This class is sensitive to where it is initialized in the sequence of model init, GPU assignment and distributed training wrappers.
In YOLOv5s, EMA helps increase mAP from 0.27 to 0.353.
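The core update is the usual EMA recurrence, ema = decay * ema + (1 - decay) * param; the sketch below illustrates it (the real class also ramps decay with the update count and treats non-float buffers differently, which is omitted here):

    import torch

    @torch.no_grad()
    def ema_update(ema_model, model, decay=0.9999):
        ema_state = ema_model.state_dict()
        for name, param in model.state_dict().items():
            if param.dtype.is_floating_point:
                ema_state[name].mul_(decay).add_(param, alpha=1 - decay)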
- class easycv.hooks.ema_hook.EMAHook(decay=0.9999, copy_model_attr=())[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Hook to maintain an Exponential Moving Average (EMA) of the model.
easycv.hooks.eval_hook module¶
- class easycv.hooks.eval_hook.EvalHook(dataloader, initial=False, interval=1, mode='test', flush_buffer=True, **eval_kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Evaluation hook.
- dataloader¶
A PyTorch dataloader.
- Type
DataLoader
- interval¶
Evaluation interval (by epochs). Default: 1.
- Type
int
- mode¶
model forward mode
- Type
str
- flush_buffer¶
Whether to flush the log buffer after evaluation.
- Type
bool
- class easycv.hooks.eval_hook.DistEvalHook(dataloader, interval=1, mode='test', initial=False, gpu_collect=False, flush_buffer=True, broadcast_bn_buffer=True, **eval_kwargs)[source]¶
Bases:
easycv.hooks.eval_hook.EvalHook
Distributed evaluation hook.
- dataloader¶
A PyTorch dataloader.
- Type
DataLoader
- interval¶
Evaluation interval (by epochs). Default: 1.
- Type
int
- mode¶
Model forward mode. Default: 'test'.
- Type
str
- tmpdir¶
Temporary directory to save the results of all processes. Default: None.
- Type
str | None
- gpu_collect¶
Whether to use GPU or CPU to collect results. Default: False.
- Type
bool
- broadcast_bn_buffer¶
Whether to broadcast the buffers (running_mean and running_var) of rank 0 to the other ranks before evaluation. Default: True.
- Type
bool
easycv.hooks.export_hook module¶
- class easycv.hooks.export_hook.ExportHook(cfg, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', export_after_each_ckpt=False)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Export the model when training on PAI.
easycv.hooks.extractor module¶
easycv.hooks.optimizer_hook module¶
- class easycv.hooks.optimizer_hook.OptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], multiply_key=[], multiply_rate=[])[source]¶
Bases:
mmcv.runner.hooks.optimizer.OptimizerHook
- __init__(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], multiply_key=[], multiply_rate=[])[source]¶
- Parameters
ignore_key ([str, …]) – Names of parameters whose gradients will be set to zero before every optimizer step while epoch < ignore_key_epoch[i].
ignore_key_epoch ([int, …]) – While epoch < ignore_key_epoch[i], the gradient of ignore_key[i] is set to zero.
multiply_key ([str, …]) – Names of parameters that use a different learning rate ratio, given by multiply_rate.
multiply_rate ([float, …]) – Learning rate ratios corresponding to multiply_key.
- class easycv.hooks.optimizer_hook.AMPFP16OptimizerHook(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], loss_scale={})[source]¶
Bases:
easycv.hooks.optimizer_hook.OptimizerHook
- __init__(update_interval=1, grad_clip=None, coalesce=True, bucket_size_mb=- 1, ignore_key=[], ignore_key_epoch=[], loss_scale={})[source]¶
- Parameters
ignore_key ([str, …]) – Names of parameters whose gradients will be set to zero before every optimizer step while epoch < ignore_key_epoch[i].
ignore_key_epoch ([int, …]) – While epoch < ignore_key_epoch[i], the gradient of ignore_key[i] is set to zero.
loss_scale (float | dict) – Gradient scaling config. If loss_scale is a float, static loss scaling is used with the specified scale. It can also be a dict containing arguments for GradScaler; for PyTorch >= 1.6 the official torch.cuda.amp.GradScaler is used. Please refer to https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler for the parameters.
easycv.hooks.oss_sync_hook module¶
- class easycv.hooks.oss_sync_hook.OSSSyncHook(work_dir, oss_work_dir, interval=1, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', other_file_list=[], iter_interval=None)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Upload log files and checkpoints to OSS when training on PAI.
- __init__(work_dir, oss_work_dir, interval=1, ckpt_filename_tmpl='epoch_{}.pth', export_ckpt_filename_tmpl='epoch_{}_export.pt', other_file_list=[], iter_interval=None)[source]¶
- Parameters
work_dir – work_dir in cfg
oss_work_dir – OSS directory to which local files in work_dir are uploaded
interval – upload frequency
ckpt_filename_tmpl – checkpoint filename template
other_file_list – other files that need to be uploaded to OSS
iter_interval – upload frequency by iteration interval. Default: None, meaning no additional iteration-based uploading is performed.
easycv.hooks.registry module¶
easycv.hooks.show_time_hook module¶
easycv.hooks.swav_hook module¶
easycv.hooks.sync_norm_hook module¶
- class easycv.hooks.sync_norm_hook.SyncNormHook(no_aug_epochs=15, interval=1, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Synchronize norm states after each training epoch; currently used in YOLOX.
- Parameters
no_aug_epochs (int) – The number of final training epochs during which norm states are synchronized at the given interval. Default: 15.
interval (int) – Synchronizing norm interval. Default: 1.
easycv.hooks.sync_random_size_hook module¶
- class easycv.hooks.sync_random_size_hook.SyncRandomSizeHook(ratio_range=(14, 26), img_scale=(640, 640), interval=10, device='cuda', **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Change and synchronize the random image size across ranks, currently used in YOLOX.
- Parameters
ratio_range (tuple[int]) – Random ratio range. A sampled ratio is multiplied by 32 to obtain the dataset's output image size. Default: (14, 26).
img_scale (tuple[int]) – Size of input image. Default: (640, 640).
interval (int) – The interval of change image size. Default: 10.
device (torch.device | str) – device for returned tensors. Default: ‘cuda’.
easycv.hooks.tensorboard module¶
- class easycv.hooks.tensorboard.TensorboardLoggerHookV2(log_dir=None, interval=10, ignore_last=True, reset_flag=False, by_epoch=True)[source]¶
Bases:
mmcv.runner.hooks.logger.tensorboard.TensorboardLoggerHook
easycv.hooks.wandb module¶
- class easycv.hooks.wandb.WandbLoggerHookV2(init_kwargs=None, interval=10, ignore_last=True, reset_flag=False, commit=True, by_epoch=True, with_step=True)[source]¶
Bases:
mmcv.runner.hooks.logger.wandb.WandbLoggerHook
easycv.hooks.yolox_lr_hook module¶
- class easycv.hooks.yolox_lr_hook.YOLOXLrUpdaterHook(num_last_epochs, **kwargs)[source]¶
Bases:
mmcv.runner.hooks.lr_updater.CosineAnnealingLrUpdaterHook
YOLOX learning rate scheme.
There are two main differences between YOLOXLrUpdaterHook and CosineAnnealingLrUpdaterHook:
- When the current epoch is greater than max_epoch - num_last_epochs, a fixed learning rate is used.
- The exponential warmup scheme differs from that of LrUpdaterHook in MMCV.
- Parameters
num_last_epochs (int) – The number of epochs with a fixed learning rate before the end of the training.
easycv.hooks.yolox_mode_switch_hook module¶
- class easycv.hooks.yolox_mode_switch_hook.YOLOXModeSwitchHook(no_aug_epochs=15, skip_type_keys=('MMMosaic', 'MMRandomAffine', 'MMMixUp'), **kwargs)[source]¶
Bases:
mmcv.runner.hooks.hook.Hook
Switch the mode of YOLOX during training.
This hook turns off the mosaic and mixup data augmentation and switches to use L1 loss in bbox_head.
- Parameters
no_aug_epochs – The number of final training epochs during which the data augmentation is turned off and L1 loss is used. Default: 15.