easycv.utils package

Submodules

easycv.utils.alias_multinomial module

class easycv.utils.alias_multinomial.AliasMethod(probs)[source]

Bases: object

https://hips.seas.harvard.edu/blog/2013/03/03/the-alias-method-efficient-sampling-with-many-discrete-outcomes/

__init__(probs)[source]

Initialize self. See help(type(self)) for accurate signature.

cuda()[source]
draw(N)[source]

Draw N samples from multinomial
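
A minimal usage sketch (an assumption based on the signature above: probs is a 1-D float torch tensor of possibly unnormalized probabilities):

import torch
from easycv.utils.alias_multinomial import AliasMethod

probs = torch.tensor([0.1, 0.2, 0.3, 0.4])
sampler = AliasMethod(probs)   # build the alias tables once
indices = sampler.draw(5)      # tensor of 5 sampled indices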

easycv.utils.bbox_util module

easycv.utils.checkpoint module

easycv.utils.checkpoint.get_checkpoint(filename)[source]
easycv.utils.checkpoint.load_checkpoint(model, filename, map_location='cpu', strict=False, logger=None, revise_keys=[('^module\\.', '')])[source]

Load checkpoint from a file or URI.

Parameters
  • model (Module) – Module to load checkpoint.

  • filename (str) – Accept local filepath, URL, torchvision://xxx, open-mmlab://xxx. Please refer to docs/model_zoo.md for details.

  • map_location (str) – Same as torch.load().

  • strict (bool) – Whether to allow different params for the model and checkpoint.

  • logger (logging.Logger or None) – The logger for error message.

  • revise_keys (list) – A list of customized keywords to modify the state_dict in checkpoint. Each item is a (pattern, replacement) pair of regular expression operations. Default: strip the prefix ‘module.’ by [(r'^module\.', '')].

Returns

The loaded checkpoint.

Return type

dict or OrderedDict
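
A minimal sketch of loading weights into an existing module (the checkpoint path below is hypothetical):

import torch.nn as nn
from easycv.utils.checkpoint import load_checkpoint

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
# a local file is shown here; URLs, torchvision:// and open-mmlab:// paths are also accepted
checkpoint = load_checkpoint(model, 'work_dirs/epoch_10.pth',
                             map_location='cpu', strict=False)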

easycv.utils.checkpoint.save_checkpoint(model, filename, optimizer=None, meta=None)[source]

Save checkpoint to file.

The checkpoint will have 3 fields: meta, state_dict and optimizer. By default meta will contain version and time info.

Parameters
  • model (Module) – Module whose params are to be saved.

  • filename (str) – Checkpoint filename.

  • optimizer (Optimizer, optional) – Optimizer to be saved.

  • meta (dict, optional) – Metadata to be saved in checkpoint.
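
A minimal sketch (the output path is hypothetical; optimizer and meta are optional):

import torch
import torch.nn as nn
from easycv.utils.checkpoint import save_checkpoint

model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
save_checkpoint(model, 'work_dirs/epoch_10.pth',
                optimizer=optimizer,
                meta=dict(epoch=10, iter=5000))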

easycv.utils.collect module

easycv.utils.collect.nondist_forward_collect(func, data_loader, length)[source]

Forward and collect network outputs.

This function performs forward propagation and collects outputs. It can be used to collect results, features, losses, etc.

Parameters
  • func (function) – The function to process data. The output must be a dictionary of CPU tensors.

  • length (int) – Expected length of output arrays.

Returns

The concatenated outputs.

Return type

results_all (dict(np.ndarray))

easycv.utils.collect.dist_forward_collect(func, data_loader, rank, length, ret_rank=-1)[source]

Forward and collect network outputs in a distributed manner.

This function performs forward propagation and collects outputs. It can be used to collect results, features, losses, etc.

Parameters
  • func (function) – The function to process data. The output must be a dictionary of CPU tensors.

  • rank (int) – This process id.

  • length (int) – Expected length of output arrays.

  • ret_rank (int) – The process that returns. Other processes will return None.

Returns

The concatenated outputs.

Return type

results_all (dict(np.ndarray))
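
A sketch of the func contract shared by both collect helpers, assuming (as in the OpenSelfSup-style collect utilities) that func is called with the unpacked data batch and must return a dict of CPU tensors; model, data_loader and the 'img' key are placeholders:

import torch
from easycv.utils.collect import nondist_forward_collect

def extract_feature(**data):          # each batch dict is unpacked into func
    with torch.no_grad():
        feat = model(data['img'])     # the 'img' key is an assumption
    return dict(feature=feat.cpu())

results = nondist_forward_collect(extract_feature, data_loader,
                                  length=len(data_loader.dataset))
# results['feature'] is a numpy array covering the whole dataset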

easycv.utils.collect_env module

easycv.utils.collect_env.collect_env()[source]

easycv.utils.config_tools module

easycv.utils.config_tools.traverse_replace(d, key, value)[source]
class easycv.utils.config_tools.WrapperConfig(cfg_dict=None, cfg_text=None, filename=None)[source]

Bases: mmcv.utils.config.Config

A facility for config and config files. It supports common file formats as configs: python/json/yaml. The interface is the same as a dict object and also allows accessing config values as attributes.

Example

>>> cfg = Config(dict(a=1, b=dict(b1=[0, 1])))
>>> cfg.a
1
>>> cfg.b
{'b1': [0, 1]}
>>> cfg.b.b1
[0, 1]
>>> cfg = Config.fromfile('tests/data/config/a.py')
>>> cfg.filename
"/home/kchen/projects/mmcv/tests/data/config/a.py"
>>> cfg.item4
'test'
>>> cfg
"Config [path: /home/kchen/projects/mmcv/tests/data/config/a.py]: "
"{'item1': [1, 2], 'item2': {'a': 0}, 'item3': True, 'item4': 'test'}"
easycv.utils.config_tools.check_base_cfg_path(base_cfg_name='configs/base.py', father_cfg_name=None, easycv_root=None)[source]

Concatenate paths by parsing path rules. For example (pseudo-code):

  1. ‘configs’ in base_cfg_name or ‘benchmarks’ in base_cfg_name:
     base_cfg_name = easycv_root + base_cfg_name

  2. ‘configs’ not in base_cfg_name and ‘benchmarks’ not in base_cfg_name:
     base_cfg_name = father_cfg_name + base_cfg_name

easycv.utils.config_tools.mmcv_file2dict_raw(filename, first_order_params=None)[source]
easycv.utils.config_tools.mmcv_file2dict_base(ori_filename, first_order_params=None, easycv_root=None)[source]
easycv.utils.config_tools.grouping_params(user_config_params)[source]
easycv.utils.config_tools.adapt_pai_params(cfg_dict)[source]
Parameters

cfg_dict (dict) – All parameters of cfg.

Returns

The cfg_dict with the export and oss configs added.

Return type

cfg_dict (dict)

easycv.utils.config_tools.init_path(ori_filename)[source]
easycv.utils.config_tools.mmcv_config_fromfile(ori_filename)[source]
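
A minimal sketch of loading a config with mmcv_config_fromfile (the config path is hypothetical):

from easycv.utils.config_tools import mmcv_config_fromfile

cfg = mmcv_config_fromfile('configs/classification/imagenet/resnet50.py')
print(cfg.model)   # access values as attributes, as with mmcv Config
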
easycv.utils.config_tools.pai_config_fromfile(ori_filename, user_config_params=None, model_type=None)[source]
easycv.utils.config_tools.get_config_class_value(cfg_dict, ori_key, dict_mem_helper)[source]
easycv.utils.config_tools.config_dict_edit(ori_cfg_dict, cfg_dict, reg, dict_mem_helper)[source]

Edit ${configs.variables} in the config dict to resolve dependencies in the config.

Parameters
  • ori_cfg_dict – Used to find the true value of ${configs.variables}.

  • cfg_dict – Traversed recursively to find the leaves of the dict.

  • reg – Regular expression pattern used to find all ${configs.variables} in the leaves of the dict.

  • dict_mem_helper – Stores the true values of ${configs.variables} that have already been found.

easycv.utils.config_tools.rebuild_config(cfg, user_config_params)[source]

Rebuild the config by the user config params: modify the config with the user config params and replace ${configs.variables} with their true values.

Returns

The rebuilt Config.

easycv.utils.config_tools.validate_export_config(cfg)[source]

easycv.utils.constant module

easycv.utils.dist_utils module

easycv.utils.dist_utils.is_master()[source]
easycv.utils.dist_utils.local_rank()[source]
easycv.utils.dist_utils.dist_zero_exec(rank=0)[source]
easycv.utils.dist_utils.get_num_gpu_per_node()[source]

Get the number of GPUs per node.

easycv.utils.dist_utils.barrier()[source]
easycv.utils.dist_utils.is_parallel(model)[source]
easycv.utils.dist_utils.obj2tensor(pyobj, device='cuda')[source]

Serialize picklable python object to tensor.

easycv.utils.dist_utils.tensor2obj(tensor)[source]

Deserialize tensor to picklable python object.

easycv.utils.dist_utils.all_reduce_dict(py_dict, op='sum', group=None, to_float=True)[source]

Apply all reduce function for python dict object.

The code is modified from https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/utils/allreduce_norm.py.

NOTE: make sure that py_dict in different ranks has the same keys and the values should be in the same shape.

Parameters
  • py_dict (dict) – Dict to apply the all-reduce op to.

  • op (str) – Operator, could be ‘sum’ or ‘mean’. Default: ‘sum’

  • group (torch.distributed.group, optional) – Distributed group, Default: None.

  • to_float (bool) – Whether to convert all values of dict to float. Default: True.

Returns

reduced python dict object.

Return type

OrderedDict
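
A minimal sketch under the documented contract (same keys and same shapes on every rank; scalar loss tensors are used here as placeholders):

import torch
from easycv.utils.dist_utils import all_reduce_dict

losses = dict(loss_cls=torch.tensor(0.5), loss_bbox=torch.tensor(1.2))
reduced = all_reduce_dict(losses, op='mean')   # averaged across all ranks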

easycv.utils.dist_utils.get_device()[source]

Returns an available device (cpu or cuda).

easycv.utils.dist_utils.sync_random_seed(seed=None, device='cuda')[source]

Make sure different ranks share the same seed. All workers must call this function, otherwise it will deadlock. This method is generally used in DistributedSampler, because the seed should be identical across all processes in the distributed group. In distributed sampling, different ranks should sample non-overlapped data in the dataset. Therefore, this function is used to make sure that each rank shuffles the data indices in the same order based on the same seed. Then different ranks could use different indices to select non-overlapped data from the same data list.

Parameters
  • seed (int, optional) – The seed. Default to None.

  • device (str) – The device where the seed will be put on. Default to ‘cuda’.

Returns

Seed to be used.

Return type

int
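
A minimal sketch; every rank must make this call, and all ranks receive the seed chosen by rank 0:

from easycv.utils.dist_utils import sync_random_seed

seed = sync_random_seed(seed=None, device='cuda')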

easycv.utils.dist_utils.is_dist_available()[source]

easycv.utils.eval_utils module

easycv.utils.eval_utils.generate_best_metric_name(evaluate_type, dataset_name, metric_names)[source]

Generate the best metric name for different evaluators / datasets / metric names.

Parameters
  • evaluate_type (str) – Evaluator type.

  • dataset_name (None or str) – Dataset name.

  • metric_names (None, str, list[str] or tuple(str)) – Metric names.

Returns

list[str]

easycv.utils.flops_counter module

easycv.utils.flops_counter.get_model_info(model, input_size, model_config, logger)[source]

Check model parameters and GFLOPs.

easycv.utils.flops_counter.get_model_complexity_info(model, input_res, print_per_layer_stat=True, as_strings=True, input_constructor=None, ost=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>)[source]
easycv.utils.flops_counter.flops_to_string(flops, units='GMac', precision=2)[source]
easycv.utils.flops_counter.params_to_string(params_num)[source]

Convert a number of parameters to a human-readable string.

Parameters

params_num (float) – The number to convert.

Returns

The formatted number string.

Return type

str

>>> params_to_string(1e9)
'1000.0 M'
>>> params_to_string(2e5)
'200.0 k'
>>> params_to_string(3e-9)
'3e-09'
easycv.utils.flops_counter.print_model_with_flops(model, units='GMac', precision=3, ost=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>)[source]
easycv.utils.flops_counter.get_model_parameters_number(model)[source]
easycv.utils.flops_counter.add_flops_counting_methods(net_main_module)[source]
easycv.utils.flops_counter.compute_average_flops_cost(self)[source]

A method that will be available after add_flops_counting_methods() is called on a desired net object. Returns current mean flops consumption per image.

easycv.utils.flops_counter.start_flops_count(self)[source]

A method that will be available after add_flops_counting_methods() is called on a desired net object. Activates the computation of mean flops consumption per image. Call it before you run the network.

easycv.utils.flops_counter.stop_flops_count(self)[source]

A method that will be available after add_flops_counting_methods() is called on a desired net object. Stops computing the mean flops consumption per image. Call whenever you want to pause the computation.

easycv.utils.flops_counter.reset_flops_count(self)[source]

A method that will be available after add_flops_counting_methods() is called on a desired net object. Resets statistics computed so far.
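
A sketch of the documented workflow (the model and the dummy input shape are placeholders):

import torch
import torch.nn as nn
from easycv.utils.flops_counter import add_flops_counting_methods

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
net = add_flops_counting_methods(model)

net.start_flops_count()
net(torch.randn(1, 3, 224, 224))          # run at least one forward pass
flops = net.compute_average_flops_cost()  # mean flops per image so far
net.stop_flops_count()
net.reset_flops_count()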

easycv.utils.flops_counter.add_flops_mask(module, mask)[source]
easycv.utils.flops_counter.remove_flops_mask(module)[source]
easycv.utils.flops_counter.is_supported_instance(module)[source]
easycv.utils.flops_counter.empty_flops_counter_hook(module, input, output)[source]
easycv.utils.flops_counter.upsample_flops_counter_hook(module, input, output)[source]
easycv.utils.flops_counter.relu_flops_counter_hook(module, input, output)[source]
easycv.utils.flops_counter.linear_flops_counter_hook(module, input, output)[source]
easycv.utils.flops_counter.pool_flops_counter_hook(module, input, output)[source]
easycv.utils.flops_counter.bn_flops_counter_hook(module, input, output)[source]
easycv.utils.flops_counter.gn_flops_counter_hook(module, input, output)[source]
easycv.utils.flops_counter.deconv_flops_counter_hook(conv_module, input, output)[source]
easycv.utils.flops_counter.conv_flops_counter_hook(conv_module, input, output)[source]
easycv.utils.flops_counter.batch_counter_hook(module, input, output)[source]
easycv.utils.flops_counter.add_batch_counter_variables_or_reset(module)[source]
easycv.utils.flops_counter.add_batch_counter_hook_function(module)[source]
easycv.utils.flops_counter.remove_batch_counter_hook_function(module)[source]
easycv.utils.flops_counter.add_flops_counter_variable_or_reset(module)[source]
easycv.utils.flops_counter.add_flops_counter_hook_function(module)[source]
easycv.utils.flops_counter.remove_flops_counter_hook_function(module)[source]
easycv.utils.flops_counter.add_flops_mask_variable_or_reset(module)[source]

easycv.utils.gather module

easycv.utils.gather.gather_tensors(input_array)[source]
easycv.utils.gather.gather_tensors_batch(input_array, part_size=100, ret_rank=-1)[source]

easycv.utils.json_utils module

Utilities for dealing with writing json strings.

json_utils wraps json.dump and json.dumps so that they can be used to safely control the precision of floats when writing to json strings or files.

class easycv.utils.json_utils.MyEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]

Bases: json.encoder.JSONEncoder

default(o)[source]

Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError).

For example, to support arbitrary iterators, you could implement default like this:

def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
iterencode(o, _one_shot=False)[source]

Encode the given object and yield each string representation as available.

For example:

for chunk in JSONEncoder().iterencode(bigobject):
    mysocket.write(chunk)
easycv.utils.json_utils.dump(obj, fid, float_digits=-1, **params)[source]

Wrapper of json.dump that allows specifying the float precision used.

Parameters
  • obj – The object to dump.

  • fid – The file id to write to.

  • float_digits – The number of digits of precision when writing floats out.

  • **params – Additional parameters to pass to json.dump.

easycv.utils.json_utils.dumps(obj, float_digits=-1, **params)[source]

Wrapper of json.dumps that allows specifying the float precision used.

Parameters
  • obj – The object to dump.

  • float_digits – The number of digits of precision when writing floats out.

  • **params – Additional parameters to pass to json.dumps.

Returns

JSON string representation of obj.

Return type

output
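
A minimal sketch of controlling float precision with dumps (the exact rounding of the output string depends on the underlying float formatting):

from easycv.utils import json_utils

json_str = json_utils.dumps({'score': 0.123456789}, float_digits=4)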

easycv.utils.json_utils.compat_dumps(data, float_digits=-1)[source]

Handle json dumps of Chinese characters and numpy data.

Parameters
  • data – Python data structure.

  • float_digits – The number of digits of precision when writing floats out.

Returns

A JSON str. In Python 2 the str is encoded with utf8; in Python 3 it is unicode (Python 3 str).

easycv.utils.json_utils.PrettyParams(**params)[source]

Returns parameters for use with dump and dumps to output pretty json.

Example usage:

json_str = json_utils.dumps(obj, **json_utils.PrettyParams())
json_str = json_utils.dumps(obj, **json_utils.PrettyParams(allow_nans=False))

Parameters

**params – Additional params to pass to json.dump or json.dumps.

Returns

Parameters that are compatible with json_utils.dump and json_utils.dumps.

Return type

params

easycv.utils.logger module

easycv.utils.logger.get_root_logger(log_file=None, log_level=20)[source]

Get the root logger.

The logger will be initialized if it has not been initialized. By default a StreamHandler will be added. If log_file is specified, a FileHandler will also be added. The name of the root logger is the top-level package name, e.g., “easycv”.

Parameters
  • log_file (str | None) – The log filename. If specified, a FileHandler will be added to the root logger.

  • log_level (int) – The root logger level. Note that only the process of rank 0 is affected, while other processes will set the level to “Error” and be silent most of the time.

Returns

The root logger.

Return type

logging.Logger
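
A minimal sketch (the log file path is hypothetical):

import logging
from easycv.utils.logger import get_root_logger

logger = get_root_logger(log_file='work_dirs/train.log',
                         log_level=logging.INFO)
logger.info('start training')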

easycv.utils.logger.print_log(msg, logger=None, level=20)[source]

Print a log message.

Parameters
  • msg (str) – The message to be logged.

  • logger (logging.Logger | str | None) – The logger to be used. Some special loggers are: “root”: the root logger obtained with get_root_logger(); “silent”: no message will be printed; None: the print() method will be used to print log messages.

  • level (int) – Logging level. Only available when logger is a Logger object or “root”.

easycv.utils.metric_distance module

easycv.utils.metric_distance.LpDistance(query_emb, ref_emb, p=2)[source]
Input:

query_emb: [n, dims] tensor
ref_emb: [m, dims] tensor
p: the norm degree p

Output:

distance_matrix: [n, m] tensor

distance_matrix_i_j = (sigma_k |query_emb_i_k - ref_emb_j_k|**p)**(1/p)

easycv.utils.metric_distance.DotproductSimilarity(query_emb, ref_emb)[source]
easycv.utils.metric_distance.CosineSimilarity(query_emb, ref_emb)[source]
Input:

query_emb: [n, dims] tensor
ref_emb: [m, dims] tensor

Output:

distance_matrix: [n, m] tensor
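
A minimal sketch of the expected shapes, assuming both functions accept plain torch tensors as documented:

import torch
from easycv.utils.metric_distance import LpDistance, CosineSimilarity

query = torch.randn(4, 128)    # n = 4 embeddings
ref = torch.randn(10, 128)     # m = 10 embeddings

dist_mat = LpDistance(query, ref, p=2)   # [4, 10] pairwise Lp distances
sim_mat = CosineSimilarity(query, ref)   # [4, 10] pairwise cosine similarities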

easycv.utils.misc module

easycv.utils.misc.tensor2imgs(tensor, mean=(0, 0, 0), std=(1, 1, 1), to_rgb=True)[source]
easycv.utils.misc.unmap(data, count, inds, fill=0)[source]

Unmap a subset of items (data) back to the original set of items (of size count).

easycv.utils.misc.add_prefix(inputs, prefix)[source]

Add prefix for dict key.

Parameters
  • inputs (dict) – The input dict with str keys.

  • prefix (str) – The prefix to add to each key name.

Returns

The dict with keys wrapped with prefix.

Return type

dict
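
A minimal sketch, assuming the conventional ‘.’ join used by mmseg-style helpers:

from easycv.utils.misc import add_prefix

add_prefix({'loss_cls': 0.2, 'acc': 0.9}, 'decode')
# -> {'decode.loss_cls': 0.2, 'decode.acc': 0.9}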

easycv.utils.misc.reparameterize_models(model)[source]
Reparameterize the model for inference, especially for:

  1. rep conv block: merge the 3x3 and 1x1 weights.

Calls each module’s switch_to_deploy recursively.

Parameters

model – nn.Module

easycv.utils.misc.deprecated(reason)[source]

This is a decorator which can be used to mark functions as deprecated. It will result in a warning being emitted when the function is used.

easycv.utils.misc.encode_str_to_tensor(obj)[source]
easycv.utils.misc.decode_tensor_to_str(obj)[source]

easycv.utils.preprocess_function module

easycv.utils.preprocess_function.bninceptionPre(image, mean=[104, 117, 128], std=[1, 1, 1])[source]
Parameters
  • image – PyTorch image tensor from PIL (range 0~1), BGR format

  • mean – norm mean

  • std – norm val

Returns

An image normalized to 0~255, RGB format

easycv.utils.preprocess_function.randomErasing(image, probability=0.5, sl=0.02, sh=0.2, r1=0.3, mean=[0.4914, 0.4822, 0.4465])[source]
easycv.utils.preprocess_function.solarize(tensor, threshold=0.5, apply_prob=0.2)[source]

tensor: a pytorch tensor

easycv.utils.preprocess_function.gaussianBlurDynamic(image, apply_prob=0.5)[source]
easycv.utils.preprocess_function.gaussianBlur(image, kernel_size=22, apply_prob=0.5)[source]
easycv.utils.preprocess_function.randomGrayScale(image, apply_prob=0.2)[source]
easycv.utils.preprocess_function.mixUp(image, alpha=0.2)[source]
easycv.utils.preprocess_function.mixUpCls(data, alpha=0.2)[source]

easycv.utils.profiling module

easycv.utils.profiling.profile_time(trace_name, name, enabled=True, stream=None, end_stream=None)[source]

Print time spent by CPU and GPU.

Useful as a temporary context manager to find sweet spots of code suitable for async implementation.
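
A minimal sketch as a temporary context manager (the trace and name labels are hypothetical; a CUDA device is assumed):

import torch
import torch.nn as nn
from easycv.utils.profiling import profile_time

model = nn.Conv2d(3, 8, 3).cuda()
img = torch.randn(1, 3, 64, 64).cuda()

with profile_time('forward_trace', 'conv_forward'):
    out = model(img)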

easycv.utils.profiling.benchmark_torch_function(iters, f, *args)[source]
easycv.utils.profiling.time_synchronized()[source]

easycv.utils.py_util module

easycv.utils.py_util.copy_attr(a, b, include=(), exclude=())[source]
easycv.utils.py_util.get_parent_path(path: str)[source]

Get the parent path; supports oss-style paths.

easycv.utils.registry module

class easycv.utils.registry.Registry(name)[source]

Bases: object

__init__(name)[source]

Initialize self. See help(type(self)) for accurate signature.

property name
property module_dict
get(key)[source]
register_module(cls=None, force=False)[source]
easycv.utils.registry.build_from_cfg(cfg, registry, default_args=None)[source]

Build a module from config dict.

Parameters
  • cfg (dict) – Config dict. It should at least contain the key “type”.

  • registry (Registry) – The registry to search the type from.

  • default_args (dict, optional) – Default initialization arguments.

Returns

The constructed object.

Return type

obj
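
A minimal sketch of the registry pattern (the registry name and class below are hypothetical):

from easycv.utils.registry import Registry, build_from_cfg

HEADS = Registry('head')

@HEADS.register_module
class DummyHead:
    def __init__(self, num_classes=10):
        self.num_classes = num_classes

head = build_from_cfg(dict(type='DummyHead', num_classes=5), HEADS)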

easycv.utils.test_util module

Contains functions which are convenient for unit testing.

easycv.utils.test_util.get_tmp_dir()[source]
easycv.utils.test_util.clear_all_tmp_dirs()[source]
easycv.utils.test_util.replace_data_for_test(cfg)[source]

replace real data with test data

Parameters

cfg – Config object

easycv.utils.test_util.RunAsSubprocess(f)[source]
easycv.utils.test_util.clean_up(test_dir)[source]
easycv.utils.test_util.run_in_subprocess(cmd)[source]
easycv.utils.test_util.dist_exec_wrapper(cmd, nproc_per_node, node_rank=0, nnodes=1, port='29527', addr='127.0.0.1', python_path=None)[source]

Do not forget to init the distributed environment in your function or in the script run by cmd:

from mmcv.runner import init_dist
init_dist(launcher='pytorch')

easycv.utils.test_util.is_port_used(port, host='127.0.0.1')[source]
easycv.utils.test_util.get_random_port()[source]
easycv.utils.test_util.pseudo_dist_init()[source]
easycv.utils.test_util.computeStats(backend, timings, batch_size=1, model_name='default')[source]

Compute the statistical metrics of time and speed.

easycv.utils.test_util.benchmark(predictor, input_data_list, backend='BACKEND', batch_size=1, model_name='default', num=200)[source]

evaluate the time and speed of different models

class easycv.utils.test_util.DistributedTestCase(methodName='runTest')[source]

Bases: unittest.case.TestCase

Distributed TestCase for testing functions in distributed mode.

Examples

import torch
from mmcv.runner import init_dist
from torch import distributed as dist

def _test_func(*args, **kwargs):
    init_dist(launcher='pytorch')
    rank = dist.get_rank()
    if rank == 0:
        value = torch.tensor(1.0).cuda()
    else:
        value = torch.tensor(2.0).cuda()
    dist.all_reduce(value)
    return value.cpu().numpy()

class DistTest(DistributedTestCase):
    def test_function_dist(self):
        args = ()    # args should be python builtin type
        kwargs = {}  # kwargs should be python builtin type
        self.start_with_torch(
            _test_func,
            num_gpus=2,
            assert_callback=lambda x: self.assertEqual(x, 3.0),
            *args,
            **kwargs,
        )

start_with_torch(func, num_gpus, assert_callback=None, save_all_ranks=False, *args, **kwargs)[source]
start_with_torchacc(func, num_gpus, assert_callback=None, save_all_ranks=False, *args, **kwargs)[source]
clean_tmp(tmp_file_list)[source]

easycv.utils.user_config_params_utils module

easycv.utils.user_config_params_utils.check_value_type(replacement, original)[source]

Convert replacement’s type to original’s type; supports converting str to int, float, list or tuple.
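
A minimal sketch of the conversion (the return values shown in comments are the expected behavior, not verified output):

from easycv.utils.user_config_params_utils import check_value_type

check_value_type('0.01', 0.1)   # '0.01' converted to float -> 0.01
check_value_type('16', 32)      # '16' converted to int -> 16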