Module

Base class for deriving trainable modules

class ivy.stateful.module.Module(*, device=None, v=None, build_mode='on_init', compile_on_next_step=False, store_vars=True, stateful=None, arg_stateful_idxs=None, kwarg_stateful_idxs=None, fallback_to_non_compiled=False, with_partial_v=False, devices=None, dtype=None)[source]

Bases: ABC

Module is a base class for deriving trainable modules.

__call__(*args, v=None, with_grads=None, stateful=None, arg_stateful_idxs=None, kwarg_stateful_idxs=None, track_submod_rets=False, submod_depth=None, submods_to_track=None, track_submod_call_order=False, expected_submod_rets=None, **kwargs)[source]


Forward an input through the current module.

Parameters
  • v – If given, use this container as the internal variables temporarily. Default is None.

  • with_grads – If True, forward this pass with gradients.

  • track_submod_rets – If True, will track the returns of submodules.

  • submod_depth – The depth of tracked submodules.

  • submods_to_track – If given, will only track submodules in submods_to_track.

  • track_submod_call_order – If True, will track the call order of submodules.

  • expected_submod_rets – If given, will raise an exception if the submodule returns differ from the expected returns.

Returns

ret – The output of the module's forward pass.

__init__(*, device=None, v=None, build_mode='on_init', compile_on_next_step=False, store_vars=True, stateful=None, arg_stateful_idxs=None, kwarg_stateful_idxs=None, fallback_to_non_compiled=False, with_partial_v=False, devices=None, dtype=None)[source]

Initialize an Ivy Module, a stateful object consisting of trainable variables.

Parameters
  • device – The device on which to create the module’s variables, e.g. ‘cuda:0’, ‘cuda:1’, ‘cpu’. (Default value = None)

  • v – Ivy container of trainable variables. Created internally by default.

  • build_mode – How the Module is built, either on initialization (now), explicitly by the user by calling build(), or the first time the __call__ method is run. Default is on initialization.

  • compile_on_next_step – Whether to compile the network on the next forward pass. Default is False.

  • store_vars – Whether or not to store the variables created. Default is True.

  • stateful – The constant id stateful items to track as part of the forward pass. Used when graph compiling, default is None.

  • arg_stateful_idxs – The nested argument indices of stateful items to track as part of the forward pass. Used when graph compiling, default is None.

  • kwarg_stateful_idxs – The nested keyword argument indices of stateful items to track as part of the forward pass. Used when graph compiling, default is None.

  • fallback_to_non_compiled – Whether to fall back to a non-compiled forward call in the case that an error is raised during the compiled forward pass. Default is False.

  • with_partial_v – Whether to allow partial specification of variables. Default is False.

  • devices – The devices on which to distribute the module’s variables, e.g. ‘cuda:0’, ‘cuda:1’, ‘cpu’. (Default value = None)

  • dtype – The data type with which to create the module’s variables. Default is None.

build(*args, from_call=False, device=None, dtype=None, **kwargs)[source]

Build the internal layers and variables for this module.

Parameters
  • from_call – If True, denotes that this build was triggered by a call to the module; otherwise it was triggered by initialization. Default is False.

  • device – The device we want to build module on. None for default device. Default is None.

  • dtype – The data type for building the module. Default is None.

Returns

ret – True if the module was built successfully.

property build_mode
property built
check_submod_rets()[source]

Compare the submodule returns with the expected submodule returns passed during the call.

Returns

ret – True if the top module has expected_submod_rets.

get_mod_key(*, top_mod=None)[source]

Get the key of current module.

Parameters

top_mod – Explicitly indicate the top module. None for the top module of the current module. Default is None.

Returns

The key of the current module, as a string.

mod_depth()[source]

Return the depth of the current module.

Returns

ret – The depth of the module in the network. Returns 0 for the root module.

mod_height()[source]

Return the height of the current module.

Returns

ret – The height of the network. Returns 0 for a leaf module.

mod_with_top_mod_key_chain(*, depth=None, flatten_key_chain=False)[source]

Show the current module from the perspective of the top module’s key chain.

Parameters
  • depth – The depth to traverse towards the top module. Default is None.

  • flatten_key_chain – If set to True, will return a flat (depth-1) container, with all nested key-chains flattened. Default is False.

save_weights(weights_path, /)[source]

Save the weights of the Module.

Parameters

weights_path – The hdf5 file for saving the weights.

Returns

None

show_mod_in_top_mod(*, upper_depth=None, lower_depth=None, flatten_key_chains=False)[source]

Show the lower submodules within the top module. upper_depth and lower_depth control the coverage of the upper and lower modules. A prompt is given if no top module is found.

Parameters
  • upper_depth – How many modules to track upwards as the upper module. None for the current module. Default is None. Will be truncated to mod_depth.

  • lower_depth – How many modules to track downwards. None for the current module. Default is None. Will be truncated to mod_height.

  • flatten_key_chains – If set to True, will return a flat (depth-1) container, with all nested key-chains flattened. Default is False.

show_structure()[source]

Prints the structure of the layer network.

Returns

this_repr – String of the structure of the module.

show_v_in_top_v(*, depth=None)[source]

Show the sub-containers from the perspective of the top layer’s variables. A prompt is given if either v or top_v is not initialized.

Parameters

depth – The number of modules we want to step in. None for the value of current module. Default is None.

sub_mods(*, show_v=True, depth=None, flatten_key_chains=False)[source]

Return a container composed of all submodules.

Parameters
  • show_v – If set to True, will return the values of all submodule variables. Default is True.

  • depth – How many layers to step in before enumerating the submodules. None for the current layer. Default is None.

  • flatten_key_chains – If set to True, will return a flat (depth-1) container, with all nested key-chains flattened. Default is False.

Returns

ret – A container composed of all submodules.

to_torch_module()[source]

Convert a trainable ivy.Module instance to an instance of a trainable torch module.

Parameters

self – trainable ivy.Module instance

Returns

ret – The new trainable torch module instance.

track_submod_call_order()[source]

Tracks the order in which the submodules are called.

Returns

ret – True if the current module allows call order tracking.

track_submod_rets()[source]

Track the returns of the submodules if the track_submod_rets argument is set to True during the call.

Returns

ret – True if the current module gets tracked in the computation graph.

v_with_top_v_key_chains(*, depth=None, flatten_key_chains=False)[source]

Show the current layer from the perspective of the top layer’s variables. A prompt is given if either v or top_v is not initialized.

Parameters
  • depth – The number of modules we want to step in. None for the value of current module. Default is None.

  • flatten_key_chains – If set to True, will return a flat (depth-1) container, with all nested key-chains flattened. Default is False.

class ivy.stateful.module.NewTorchModule(ivy_module, *args, **kwargs)[source]

Bases: Module

__init__(ivy_module, *args, **kwargs)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(*a, **kw)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

This should hopefully have given you an overview of the module submodule. If you have any questions, please feel free to reach out on our Discord in the module channel, or in the module forum!