easycv.models package

Subpackages

Submodules

easycv.models.base module

class easycv.models.base.BaseModel(init_cfg=None)[source]

Bases: torch.nn.modules.module.Module

Base class for models.

__init__(init_cfg=None)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

property is_init: bool
init_weights()[source]
abstract forward_train(img: torch.Tensor, **kwargs) → Dict[str, torch.Tensor][source]

Abstract interface for the model's forward pass during training.

Parameters
  • img (Tensor) – image tensor

  • kwargs (keyword arguments) – Specific to the concrete implementation.

forward_test(img: torch.Tensor, **kwargs) → Dict[str, torch.Tensor][source]

Abstract interface for the model's forward pass during testing.

Parameters
  • img (Tensor) – image tensor

  • kwargs (keyword arguments) – Specific to the concrete implementation.

forward(mode='train', *args, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of calling this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.
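The forward(mode='train', ...) signature implies a dispatch on mode to the train/test interfaces above. A minimal, torch-free sketch of that dispatch pattern (illustrative names and return values, not EasyCV's actual implementation):

```python
# Simplified sketch of mode-based dispatch in forward(); plain dicts
# stand in for tensor outputs (an assumption for illustration only).
class BaseModelSketch:
    def forward_train(self, img, **kwargs):
        # A concrete model would compute and return its losses here.
        return {'loss': 0.5}

    def forward_test(self, img, **kwargs):
        # A concrete model would return predictions here.
        return {'prob': [0.9, 0.1]}

    def forward(self, mode='train', *args, **kwargs):
        # Route the call according to `mode`.
        if mode == 'train':
            return self.forward_train(*args, **kwargs)
        elif mode == 'test':
            return self.forward_test(*args, **kwargs)
        raise ValueError(f'Unknown forward mode: {mode}')

model = BaseModelSketch()
train_out = model.forward(mode='train', img=None)
test_out = model.forward(mode='test', img=None)
```

In the real class, subclasses implement forward_train() and forward_test() and leave the dispatch in forward() untouched.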

train_step(data, optimizer)[source]

The iteration step during training.

This method defines an iteration step during training, except for the back propagation and optimizer updating, which are done in an optimizer hook. Note that in some complicated cases or models, the whole process including back propagation and optimizer updating is also defined in this method, such as GAN.

Parameters
  • data (dict) – The output of dataloader.

  • optimizer (torch.optim.Optimizer | dict) – The optimizer of runner is passed to train_step(). This argument is unused and reserved.

Returns

It should contain at least 3 keys: loss, log_vars, num_samples.

  • loss is a tensor for back propagation, which can be a weighted sum of multiple losses.

  • log_vars contains all the variables to be sent to the logger.

  • num_samples indicates the batch size (when the model is DDP, it means the batch size on each GPU), which is used for averaging the logs.

Return type

dict
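The return contract above can be sketched as follows. This is a hedged illustration of the required dict shape only; plain floats stand in for loss tensors, and the loss names are made up:

```python
# Sketch of a train_step() return value with the three required keys:
# loss, log_vars, num_samples. Floats replace tensors for illustration.
def train_step(data, optimizer=None):
    # Hypothetical per-component losses.
    losses = {'cls_loss': 0.5, 'reg_loss': 0.25}
    loss = sum(losses.values())               # weighted sum of all losses
    log_vars = dict(losses, total_loss=loss)  # variables sent to the logger
    return {
        'loss': loss,
        'log_vars': log_vars,
        'num_samples': len(data['img']),      # batch size, for log averaging
    }

out = train_step({'img': [0, 1, 2, 3]})
```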

val_step(data, optimizer)[source]

The iteration step during validation.

This method shares the same signature as train_step(), but is used during validation epochs. Note that the evaluation after training epochs is not implemented with this method, but with an evaluation hook.

show_result(**kwargs)[source]

Visualize the results.

training: bool

easycv.models.builder module

easycv.models.builder.build(cfg, registry, default_args=None)[source]
easycv.models.builder.build_backbone(cfg)[source]
easycv.models.builder.build_neck(cfg)[source]
easycv.models.builder.build_head(cfg)[source]
easycv.models.builder.build_loss(cfg)[source]
easycv.models.builder.build_model(cfg)[source]
easycv.models.builder.build_voxel_encoder(cfg)[source]

Build voxel encoder.

easycv.models.builder.build_middle_encoder(cfg)[source]

Build middle level encoder.

easycv.models.builder.build_fusion_layer(cfg)[source]

Build fusion layer.

easycv.models.builder.build_transformer(cfg, default_args=None)[source]

Builder for Transformer.

easycv.models.builder.build_positional_encoding(cfg, default_args=None)[source]

Builder for Position Encoding.

easycv.models.builder.build_attention(cfg, default_args=None)[source]

Builder for attention.

easycv.models.builder.build_feedforward_network(cfg, default_args=None)[source]

Builder for feed-forward network (FFN).

easycv.models.builder.build_transformer_layer(cfg, default_args=None)[source]

Builder for transformer layer.

easycv.models.builder.build_transformer_layer_sequence(cfg, default_args=None)[source]

Builder for transformer encoder and transformer decoder.
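All of the build_* functions above follow the same config-driven pattern: a registry maps a type string to a registered class, and build() instantiates that class from the remaining config keys, merged with optional defaults. A minimal self-contained sketch of this pattern (the Registry class and ResNetToy below are illustrative, not EasyCV's actual implementation):

```python
# Sketch of the registry + build(cfg) pattern behind build_backbone(),
# build_head(), etc. All names here are stand-ins for illustration.
class Registry:
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self, cls):
        # Used as a decorator: @BACKBONES.register_module
        self._modules[cls.__name__] = cls
        return cls

    def get(self, key):
        return self._modules[key]

def build(cfg, registry, default_args=None):
    # Pop `type` to pick the class, merge defaults, then instantiate.
    args = dict(cfg)
    obj_type = args.pop('type')
    if default_args:
        for key, value in default_args.items():
            args.setdefault(key, value)
    return registry.get(obj_type)(**args)

BACKBONES = Registry('backbone')

@BACKBONES.register_module
class ResNetToy:
    def __init__(self, depth=50):
        self.depth = depth

backbone = build(dict(type='ResNetToy', depth=18), BACKBONES)
```

Each specialized builder (build_backbone, build_neck, ...) is then just build() partially applied to its own registry.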

easycv.models.modelzoo module

easycv.models.registry module