- class CacheConfig(cache_type: CacheType | None = CacheType.AUTO, mapping: Dict[str, CacheType] = <factory>, key_extractors: KeyExtractorType | Dict[str, KeyExtractorType] = <factory>, batch_size: int | None = 32, num_workers: int | None = None, save_dir: str | None = None)¶
Determine cache settings.
An instance of this class should be returned by TrainableModel.configure_caches to enable caching.
- batch_size: int | None = 32¶
Batch size to be used in CacheDataLoader during the caching process. It does not affect other training stages.
- key_extractors: KeyExtractorType | Dict[str, KeyExtractorType]¶
Mapping of encoders to key extractor functions required to cache non-hashable objects.
- num_workers: int | None = None¶
Number of workers to be used in CacheDataLoader during the caching process. It does not affect other training stages.
- save_dir: str | None = None¶
If provided, the cache will be saved to the given directory and re-used between launches.
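Taken together, the fields above form one configuration object. A minimal sketch, assuming the import path `quaterion.train.cache` is available; all values are illustrative, not recommended defaults:

```python
from quaterion.train.cache import CacheConfig, CacheType

# Cache on CPU, use a larger batch size while filling the cache,
# and persist it to disk so later launches can re-use it.
# batch_size and num_workers affect only the caching pass,
# not other training stages.
cache_config = CacheConfig(
    cache_type=CacheType.CPU,   # keep cached tensors on CPU
    batch_size=128,             # used by CacheDataLoader only
    num_workers=4,              # workers for the caching pass only
    save_dir="./cache",         # re-use the cache between launches
)
```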
- class CacheType(value)¶
Available tensor devices to be used for caching.
- AUTO = 'auto'¶
Use CUDA if it is available, else use CPU.
- CPU = 'cpu'¶
Cached tensors are stored on the CPU.
- GPU = 'gpu'¶
Cached tensors are stored on the GPU.
- NONE = 'none'¶
Do not use cache.
- KeyExtractorType¶
Type of function to extract a hash value from the input object. Required if there is no other way to distinguish values for caching.
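A key extractor only needs to turn an input object into a stable, hashable key. A minimal, library-independent sketch; the function name and the dict-shaped input are assumptions for illustration, not part of the Quaterion API:

```python
import hashlib
import json

def dict_key_extractor(obj: dict) -> str:
    # Non-hashable objects such as dicts cannot serve as cache keys
    # directly, so serialize deterministically (sorted keys) and
    # hash the result into a stable string key.
    payload = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# The same logical record maps to the same key regardless of
# the dict's insertion order.
a = dict_key_extractor({"text": "hello", "lang": "en"})
b = dict_key_extractor({"lang": "en", "text": "hello"})
```

Such a function would be supplied via `key_extractors`, either as a single callable applied to all encoders or as a mapping from encoder name to callable.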