easycv.runner package

Submodules

easycv.runner.ev_runner module

class easycv.runner.ev_runner.EVRunner(model, batch_processor=None, optimizer=None, work_dir=None, logger=None, meta=None, fp16_enable=False)[source]

Bases: mmcv.runner.epoch_based_runner.EpochBasedRunner

__init__(model, batch_processor=None, optimizer=None, work_dir=None, logger=None, meta=None, fp16_enable=False)[source]

Epoch-based runner for EasyCV, adding support for OSS I/O and file synchronization.

Parameters
  • model (torch.nn.Module) – The model to be run.

  • batch_processor (callable) – A callable method that process a data batch. The interface of this method should be batch_processor(model, data, train_mode) -> dict

  • optimizer (dict or torch.optim.Optimizer) – It can be either an optimizer (in most cases) or a dict of optimizers (for models that require more than one optimizer, e.g., GAN).

  • work_dir (str, optional) – The working directory to save checkpoints and logs. Defaults to None.

  • logger (logging.Logger) – Logger used during training. Defaults to None. (The default value is just for backward compatibility)

  • meta (dict | None) – A dict that records some important information such as environment info and seed, which will be logged in the logger hook. Defaults to None.

  • fp16_enable (bool) – Whether to enable fp16 (mixed-precision) training.
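For orientation, a minimal construction sketch follows; MyModel, the learning rate, and the work_dir path are placeholders, not part of the EasyCV API:

    import logging

    import torch

    from easycv.runner.ev_runner import EVRunner


    class MyModel(torch.nn.Module):
        """Toy model used only to illustrate the constructor arguments."""

        def __init__(self):
            super().__init__()
            self.fc = torch.nn.Linear(8, 2)

        def forward(self, x):
            return self.fc(x)


    model = MyModel()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    runner = EVRunner(
        model,
        optimizer=optimizer,
        work_dir='./work_dir',              # checkpoints and logs go here
        logger=logging.getLogger('easycv'),
        fp16_enable=False,                  # set True for mixed-precision training
    )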

run_iter(data_batch, train_mode, **kwargs)[source]

Process a single iteration.

Parameters
  • data_batch – A batch of data, typically a dict.

  • train_mode (bool) – If True, run a training step; otherwise run a validation step.
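User code normally does not call run_iter directly; it is driven by the run() loop. A hedged sketch of the contract, assuming mmcv's EpochBasedRunner convention of storing step results on runner.outputs:

    # `runner` and `data_batch` are placeholders from the sketch above.
    runner.run_iter(data_batch, train_mode=True)
    print(runner.outputs['loss'])  # loss returned by the model's train_step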

train(data_loader, **kwargs)[source]

Training process for one epoch, which iterates through all training data and calls hooks at different stages.

Parameters
  • data_loader – Data loader object for training.
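train() is usually invoked by the inherited run() loop rather than called directly. A sketch assuming an mmcv-style workflow and a placeholder train_loader DataLoader:

    # One 'train' phase per cycle, for 10 epochs; the max_epochs argument
    # follows the mmcv EpochBasedRunner interface.
    runner.run([train_loader], workflow=[('train', 1)], max_epochs=10)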

val(data_loader, **kwargs)[source]

Deprecated validation step; use the evaluation hook instead.

save_checkpoint(out_dir, filename_tmpl='epoch_{}.pth', save_optimizer=True, meta=None, create_symlink=True)[source]

Save checkpoint to file.

Parameters
  • out_dir – Directory where checkpoint files are to be saved.

  • filename_tmpl (str, optional) – Checkpoint filename pattern.

  • save_optimizer (bool, optional) – Whether to save the optimizer state.

  • meta (dict, optional) – Metadata to be saved in the checkpoint.

  • create_symlink (bool, optional) – Whether to create a symlink pointing to the latest checkpoint.
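A short usage sketch; the metadata dict is hypothetical:

    # Writes e.g. ./work_dir/epoch_5.pth after epoch 5, per filename_tmpl.
    runner.save_checkpoint(
        './work_dir',
        filename_tmpl='epoch_{}.pth',
        save_optimizer=True,             # keep optimizer state for resume()
        meta=dict(note='baseline run'),  # hypothetical extra metadata
    )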

current_lr()[source]

Get current learning rates.

Returns

Current learning rates of all param groups. If the runner has a dict of optimizers, this method will return a dict.

Return type

list[float] | dict[str, list[float]]
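A sketch handling both documented return shapes:

    lrs = runner.current_lr()
    if isinstance(lrs, dict):
        # Dict of optimizers: one learning-rate list per optimizer name.
        for name, lr_list in lrs.items():
            print(name, lr_list)
    else:
        # Single optimizer: one learning rate per param group.
        print(lrs)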

load_checkpoint(filename, map_location=device(type='cpu'), strict=False, logger=None)[source]

Load checkpoint from a file or URL.

Parameters
  • filename (str) – Accepts a local file path, a URL, torchvision://xxx, open-mmlab://xxx, or oss://xxx. Please refer to docs/source/model_zoo.md for details.

  • map_location (str) – Same as torch.load().

  • strict (bool) – Whether to strictly enforce that the keys in the checkpoint match the keys of the model's state dict.

  • logger (logging.Logger or None) – The logger for error messages.

Returns

The loaded checkpoint.

Return type

dict or OrderedDict
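A hedged sketch; the oss:// path is a placeholder bucket, not a real model zoo entry:

    # Weights are loaded into runner.model; the raw checkpoint dict is returned.
    checkpoint = runner.load_checkpoint(
        'oss://my-bucket/models/epoch_10.pth',  # hypothetical OSS path
        map_location='cpu',
        strict=False,
    )
    print(checkpoint.get('meta', {}))  # metadata written by save_checkpoint, if any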

resume(checkpoint, resume_optimizer=True, map_location='default')[source]

Resume state dict from checkpoint.

Parameters
  • checkpoint (str) – Path of the checkpoint to resume from.

  • resume_optimizer (bool) – Whether to resume the optimizer state.

  • map_location (str) – Same as torch.load().
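A closing sketch; the checkpoint path is a placeholder:

    # Restores model weights, epoch/iter counters and (optionally) the
    # optimizer state, so training can continue where it left off.
    runner.resume('./work_dir/epoch_10.pth', resume_optimizer=True)
    print(runner.epoch)  # epoch counter restored from the checkpoint meta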