dantro.data_loaders package

This module implements loader mixin classes for use with the DataManager.

All these mixin classes should follow this pattern:

class LoadernameLoaderMixin:

    @add_loader(TargetCls=TheTargetContainerClass)
    def _load_loadername(filepath: str, *, TargetCls: type):
        # ...
        return TargetCls(...)

As ensured by the add_loader() decorator (implemented in the dantro.data_loaders._tools module), each _load_loadername method is supplied with the path to a file and the TargetCls argument, which can be called to create an object of the correct type and name.

By default, and to decouple the loader from the container, the loader method is treated as a static method; in other words, the first positional argument should ideally not be self! If self is required for some reason, set the decorator's omit_self option to False, making it a regular (instead of a static) method.
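For illustration, a minimal custom loader following this pattern could look like the sketch below. The JsonLoaderMixin name and the choice of ObjectContainer as target class are illustrative assumptions for this example, not part of dantro:

import json

from dantro.containers import ObjectContainer
from dantro.data_loaders._tools import add_loader

class JsonLoaderMixin:
    """Supplies a (hypothetical) loader for JSON files."""

    @add_loader(TargetCls=ObjectContainer)
    def _load_json(filepath: str, *, TargetCls: type, **load_kwargs):
        # No `self` here: by default, the loader acts as a static method
        with open(filepath) as f:
            data = json.load(f, **load_kwargs)

        # TargetCls already carries the correct type and name
        return TargetCls(data=data)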

class dantro.data_loaders.AllAvailableLoadersMixin[source]

Bases: dantro.data_loaders.load_text.TextLoaderMixin, dantro.data_loaders.load_yaml.YamlLoaderMixin, dantro.data_loaders.load_pkl.PickleLoaderMixin, dantro.data_loaders.load_hdf5.Hdf5LoaderMixin, dantro.data_loaders.load_xarray.XarrayLoaderMixin, dantro.data_loaders.load_numpy.NumpyLoaderMixin

A mixin bundling all data loaders that are available in dantro.

This is useful for a more convenient import in a downstream DataManager.

See the individual mixins for a more detailed documentation.
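For example, a downstream project might bundle all loaders into its own manager as in the following sketch; MyDataManager, the data directory, and the load call arguments are placeholders:

from dantro import DataManager
from dantro.data_loaders import AllAvailableLoadersMixin

class MyDataManager(AllAvailableLoadersMixin, DataManager):
    """A DataManager with all of dantro's loaders available"""

dm = MyDataManager("path/to/data")
dm.load("cfg", loader="yaml", glob_str="config/*.yml")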

_HDF5_DECODE_ATTR_BYTESTRINGS = True
_HDF5_DSET_DEFAULT_CLS

alias of dantro.containers.numeric.NumpyDataContainer

_HDF5_DSET_MAP = None
_HDF5_GROUP_MAP = None
_HDF5_MAP_FROM_ATTR = None
_container_from_h5dataset(h5dset: h5py._hl.dataset.Dataset, target: dantro.base.BaseDataGroup, *, name: str, load_as_proxy: bool, proxy_kwargs: dict, DsetCls: type, map_attr: str, DsetMap: dict, plvl: int, pfstr: str, **_) → dantro.base.BaseDataContainer

Adds a new data container from an h5.Dataset

Dataset types may be mapped to different dantro container types; this is controlled by the extracted HDF5 attribute with the name specified in the _HDF5_MAP_FROM_ATTR class attribute.

Parameters
  • h5dset (h5.Dataset) – The source dataset to load into target as a dantro data container.

  • target (BaseDataGroup) – The target group in which the h5dset will be represented as a new dantro data container.

  • name (str) – The name of the new container

  • load_as_proxy (bool) – Whether to load as Hdf5DataProxy

  • proxy_kwargs (dict) – Upon proxy initialization, unpacked into dantro.proxy.hdf5.Hdf5DataProxy.__init__()

  • DsetCls (BaseDataContainer) – The type that is used to create the dataset-equivalents in target. If mapping is enabled, this serves as the fallback type.

  • map_attr (str) – The HDF5 attribute to inspect in order to determine the name of the mapping

  • DsetMap (dict) – Map of names to BaseDataContainer-derived types; always needed, but may be empty

  • plvl (int) – The verbosity of the progress indicator

  • pfstr (str) – A format string for the progress indicator

_decode_attr_val(attr_val) → str

Wrapper around decode_bytestrings

_evaluate_type_mapping(key: str, *, attrs: dict, tmap: Dict[str, type], fallback: type) → type

Given a dataset's or group's attributes, evaluates which type the target container or group should use.

_group_from_h5group(h5grp: h5py._hl.group.Group, target: dantro.base.BaseDataGroup, *, name: str, map_attr: str, GroupMap: dict, **_) → dantro.base.BaseDataGroup

Adds a new group from an h5.Group

The group types may be mapped to different dantro types; this is controlled by the extracted HDF5 attribute with the name specified in the _HDF5_MAP_FROM_ATTR class attribute.

Parameters
  • h5grp (h5.Group) – The HDF5 group to create a dantro group for in the target group.

  • target (BaseDataGroup) – The group in which to create a new group that represents h5grp

  • name (str) – The name of the new group

  • GroupMap (dict) – Map of names to BaseDataGroup-derived types; always needed, but may be empty

  • map_attr (str) – The HDF5 attribute to inspect in order to determine the name of the mapping

  • **_ – ignored

_load_hdf5(*args, **kwargs)

Loads the specified HDF5 file into DataGroup- and DataContainer-like objects; this completely recreates the hierarchical structure of the HDF5 file. The data can be loaded into memory completely or as proxy objects.

The h5py File and Group objects will be converted to the specified DataGroup-derived objects; the Dataset objects to the specified DataContainer-derived objects.

All HDF5 group or dataset attributes are carried over and are accessible under the attrs attribute of the respective dantro objects in the tree.

Parameters
  • filepath (str) – The path to the HDF5 file that is to be loaded

  • TargetCls (type) – The group type the file is loaded into

  • load_as_proxy (bool, optional) – If True, leaf datasets are loaded as dantro.proxy.hdf5.Hdf5DataProxy objects. That way, data is only loaded into memory when the proxy's .data property is accessed for the first time, either directly or indirectly.

  • proxy_kwargs (dict, optional) – When loading as proxy, these parameters are unpacked in the __init__ call. For available arguments, see Hdf5DataProxy.

  • lower_case_keys (bool, optional) – Whether to use only lower-case versions of the paths encountered in the HDF5 file.

  • enable_mapping (bool, optional) – If True, will use the class variables _HDF5_GROUP_MAP and _HDF5_DSET_MAP to map groups or datasets to custom dantro classes during loading. Which attribute to read is determined by the map_from_attr argument (see there).

  • map_from_attr (str, optional) – The attribute from which to read the key that is used in the mapping. If nothing is given, the class variable _HDF5_MAP_FROM_ATTR is used.

  • direct_insertion (bool, optional) – If True, some non-crucial checks are skipped during insertion and elements are inserted (more or less) directly into the data tree, thus speeding up the data loading process. This option should only be enabled if data is loaded into a yet unpopulated part of the data tree, otherwise existing elements might be overwritten silently. This option only applies to data groups, not to containers.

  • progress_params (dict, optional) – Parameters for the progress indicator. Possible keys:

    level (int):
      How verbose to print progress info; possible values are 0 (no progress output), 1 (on file level), and 2 (on dataset level). Note that this option and the progress_indicator of the DataManager are independent of each other.

    fstr (str):
      Format string for the progress report; receives the following keys:

      • progress_info (total progress indicator)

      • fname (basename of the current HDF5 file)

      • fpath (full path of the current HDF5 file)

      • name (current dataset name)

      • path (current path within the HDF5 file)

Returns

The populated root-level group, corresponding to the base group of the file

Return type

OrderedDataGroup

Raises

ValueError – If enable_mapping is set but no map attribute can be determined from the given map_from_attr argument or the class variable _HDF5_MAP_FROM_ATTR
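As a usage sketch, assuming a DataManager instance dm set up as outlined above; the entry name and glob pattern are placeholders, and the loader name corresponds to the method name without the _load_ prefix:

# Recreate the HDF5 tree under the new entry "sim", with type mapping
# enabled and dataset-level progress output:
dm.load("sim", loader="hdf5", glob_str="data/**/*.h5",
        enable_mapping=True, progress_params=dict(level=2))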

_load_hdf5_as_dask(*args, **kwargs)

This is a shorthand for _load_hdf5() with the load_as_proxy flag set and resolve_as_dask passed as an additional argument to the proxy via proxy_kwargs.

_load_hdf5_proxy(*args, **kwargs)

This is a shorthand for _load_hdf5() with the load_as_proxy flag set.
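A sketch of the two shorthands, under the same assumptions as above (resolving as dask additionally requires the dask package to be installed):

# Leaf datasets become Hdf5DataProxy objects, deferring the actual read:
dm.load("sim_proxy", loader="hdf5_proxy", glob_str="data/**/*.h5")

# Same, but the proxies resolve into dask arrays instead of numpy arrays:
dm.load("sim_dask", loader="hdf5_as_dask", glob_str="data/**/*.h5")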

_load_numpy(*args, **kwargs)

Loads the output of numpy.save back into a NumpyDataContainer.

Parameters
  • filepath (str) – Where the *.npy file is located

  • TargetCls (type) – The class constructor

  • **load_kwargs – Passed on to numpy.load, see there for kwargs

Returns

The reconstructed NumpyDataContainer

Return type

NumpyDataContainer
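For example, a round trip might look as follows; paths and the entry name are placeholders, and dm is assumed to be a DataManager that includes this mixin:

import numpy as np

np.save("data/state.npy", np.arange(42))

dm.load("state", loader="numpy", glob_str="data/*.npy")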

_load_numpy_binary(*args, **kwargs)

Loads the output of numpy.save back into a NumpyDataContainer.

Parameters
  • filepath (str) – Where the *.npy file is located

  • TargetCls (type) – The class constructor

  • **load_kwargs – Passed on to numpy.load, see there for kwargs

Returns

The reconstructed NumpyDataContainer

Return type

NumpyDataContainer

_load_pickle(*args, **kwargs)

Loads a pickled object using dill.load.

Parameters
  • filepath (str) – Where the pickle-dumped file is located

  • TargetCls (type) – The class constructor

  • **pkl_kwargs – Passed on to the load function

Returns

The unpickled object, stored in a dantro container

Return type

ObjectContainer

_load_pkl(*args, **kwargs)

Loads a pickled object using dill.load.

Parameters
  • filepath (str) – Where the pickle-dumped file is located

  • TargetCls (type) – The class constructor

  • **pkl_kwargs – Passed on to the load function

Returns

The unpickled object, stored in a dantro container

Return type

ObjectContainer
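A round-trip sketch, with placeholder paths and payload; dm is again assumed to be a DataManager that includes this mixin:

import dill

with open("data/model.pkl", mode="wb") as f:
    dill.dump(dict(epoch=3, weights=[0.1, 0.2]), f)

# loader="pickle" works equivalently:
dm.load("model", loader="pkl", glob_str="data/*.pkl")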

_load_plain_text(*args, **kwargs)

Loads the content of a plain text file back into a StringContainer.

Parameters
  • filepath (str) – Where the plain text file is located

  • TargetCls (type) – The class constructor

  • **load_kwargs – Passed on to open, see there for possible kwargs

Returns

The reconstructed StringContainer

Return type

StringContainer

_load_text(*args, **kwargs)

Loads the content of a plain text file back into a StringContainer.

Parameters
  • filepath (str) – Where the plain text file is located

  • TargetCls (type) – The class constructor

  • **load_kwargs – Passed on to open, see there for possible kwargs

Returns

The reconstructed StringContainer

Return type

StringContainer
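For example, as a sketch with placeholder paths and entry name:

# The encoding argument is passed on to open() via **load_kwargs:
dm.load("logs", loader="text", glob_str="logs/*.log", encoding="utf-8")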

_load_xr_dataarray(*args, **kwargs)

Loads an xr.DataArray from a netcdf file into an XrDataContainer.

Parameters
  • filepath (str) – Where the xarray-dumped netcdf file is located

  • TargetCls (type) – The class constructor

  • load_completely (bool, optional) – If True, will call .load() on the loaded DataArray to load it completely into memory.

  • engine (str, optional) – Which engine to use for loading. Refer to the xarray documentation for available engines.

  • **load_kwargs – Passed on to xr.load_dataarray, see there for kwargs

Returns

The reconstructed XrDataContainer

Return type

XrDataContainer

_load_xr_dataset(*args, **kwargs)

Loads an xr.Dataset from a netcdf file into a PassthroughContainer.

Note

As there is no proper equivalent of a dataset in dantro (yet), and unpacking the dataset into a dantro group would reduce functionality, the PassthroughContainer is used here. It should behave almost the same as an xr.Dataset.

Parameters
  • filepath (str) – Where the xarray-dumped netcdf file is located

  • TargetCls (type) – The class constructor

  • load_completely (bool, optional) – If True, will call .load() on the loaded xr.Dataset to load it completely into memory.

  • engine (str, optional) – Which engine to use for loading. Refer to the xarray documentation for available engines.

  • **load_kwargs – Passed on to xr.load_dataset, see there for kwargs

Returns

The reconstructed xr.Dataset, stored in a passthrough container

Return type

PassthroughContainer
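A usage sketch; the file content, paths, and tree paths are illustrative, and writing NetCDF requires an appropriate xarray backend:

import xarray as xr

ds = xr.Dataset(dict(temp=("t", [0.1, 0.2, 0.3])), coords=dict(t=[0, 1, 2]))
ds.to_netcdf("data/fields.nc")

dm.load("fields", loader="xr_dataset", glob_str="data/*.nc")

# The passthrough container forwards attribute access to the xr.Dataset;
# here, the container is assumed to be named after the file it came from:
mean_temp = dm["fields"]["fields"].temp.mean()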

_load_yaml(*args, **kwargs)

Loads a yaml file from the given path and creates a container to store that data in.

Parameters
  • filepath (str) – Where to load the yaml file from

  • TargetCls (type) – The class constructor

Returns

The loaded yaml file as a container

Return type

MutableMappingContainer

_load_yaml_to_object(*args, **kwargs)

Loads a yaml file from the given path and creates a container to store that data in.

Parameters
  • filepath (str) – Where to load the yaml file from

  • TargetCls (type) – The class constructor

Returns

The loaded yaml file as a container

Return type

ObjectContainer

_load_yml(*args, **kwargs)

Loads a yaml file from the given path and creates a container to store that data in.

Parameters
  • filepath (str) – Where to load the yaml file from

  • TargetCls (type) – The class constructor

Returns

The loaded yaml file as a container

Return type

MutableMappingContainer

_load_yml_to_object(*args, **kwargs)

Loads a yaml file from the given path and creates a container to store that data in.

Parameters
  • filepath (str) – Where to load the yaml file from

  • TargetCls (type) – The class constructor

Returns

The loaded yaml file as a container

Return type

ObjectContainer
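The difference between the mapping and object variants, as a sketch with placeholder paths:

# Root node is a mapping: results in a MutableMappingContainer
dm.load("cfg", loader="yml", glob_str="config/run.yml")

# Stored as a plain object instead, e.g. if the root node is not a mapping:
dm.load("cfg_obj", loader="yml_to_object", glob_str="config/run.yml")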

_recursively_load_hdf5(src: Union[h5py._hl.group.Group, h5py._hl.files.File], *, target: dantro.base.BaseDataGroup, lower_case_keys: bool, direct_insertion: bool, **kwargs)

Recursively loads the data from a source object (an h5.File or a h5.Group) into the target dantro group.

Parameters
  • src (Union[h5.Group, h5.File]) – The HDF5 source object from which to load the data. This object is iterated over.

  • target (BaseDataGroup) – The target group to populate with the data from src.

  • lower_case_keys (bool) – Whether to make keys lower-case

  • direct_insertion (bool) – Whether to use direct insertion mode on the target group (and all groups below)

  • **kwargs – Passed on to the group and container loader methods, _container_from_h5dataset() and _group_from_h5group().

Raises

NotImplementedError – When encountering objects other than groups or datasets in the HDF5 file