codes.surrogates.LatentNeuralODE package#

Submodules#

codes.surrogates.LatentNeuralODE.latent_neural_ode module#

class codes.surrogates.LatentNeuralODE.latent_neural_ode.Decoder(out_features, latent_features=5, coder_layers=3, coder_width=32, activation=ReLU(), dtype=torch.float64)#

Bases: Module

Fully connected decoder that maps the latent space back to the output.

forward(x)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type:

Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class codes.surrogates.LatentNeuralODE.latent_neural_ode.Encoder(in_features, latent_features=5, coder_layers=3, coder_width=32, activation=ReLU(), dtype=torch.float64)#

Bases: Module

Fully connected encoder that maps input features to a latent space.

forward(x)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type:

Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
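A minimal sketch of the encoder/decoder pair; batch size, shapes, and hyperparameters are illustrative only:

```python
import torch

from codes.surrogates.LatentNeuralODE.latent_neural_ode import Decoder, Encoder

# 29 quantities compressed into a 5-dimensional latent space (defaults written out explicitly).
encoder = Encoder(in_features=29, latent_features=5, coder_layers=3, coder_width=32)
decoder = Decoder(out_features=29, latent_features=5, coder_layers=3, coder_width=32)

x = torch.randn(8, 29, dtype=torch.float64)  # batch of 8 states (default dtype is float64)
z = encoder(x)                               # latent representation, shape (8, 5)
x_rec = decoder(z)                           # reconstruction, shape (8, 29)
```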

class codes.surrogates.LatentNeuralODE.latent_neural_ode.LatentNeuralODE(device=None, n_quantities=29, n_timesteps=100, n_parameters=0, training_id=None, config=None, dtype=torch.float32)#

Bases: AbstractSurrogateModel

LatentNeuralODE is a latent neural ODE surrogate model consisting of an encoder, a neural ODE that evolves the latent state, and a decoder. Fixed parameters can be injected either into the encoder or directly into the ODE network, controlled by config.encode_params.

Parameters:
  • device (str | None) – Device for training (e.g. ‘cpu’, ‘cuda:0’).

  • n_quantities (int) – Number of quantities.

  • n_timesteps (int) – Number of timesteps.

  • n_parameters (int) – Number of fixed parameters (default 0).

  • config (dict | None) – Configuration for the model.
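A minimal construction sketch; the config dict keys are an assumption and are taken to mirror the LatentNeuralODEBaseConfig fields documented below:

```python
from codes.surrogates.LatentNeuralODE.latent_neural_ode import LatentNeuralODE

# Hypothetical config dict; keys are assumed to map onto LatentNeuralODEBaseConfig fields.
config = {
    "latent_features": 5,
    "coder_layers": 3,
    "coder_width": 64,
    "encode_params": False,  # parameters go to the ODE network rather than the encoder
}

model = LatentNeuralODE(
    device="cpu",
    n_quantities=29,
    n_timesteps=100,
    n_parameters=0,
    config=config,
)
```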

create_dataloader(data, timesteps, batch_size, shuffle, dataset_params, num_workers=0, pin_memory=True)#
fit(train_loader, test_loader, epochs, position=0, description='Training LatentNeuralODE', multi_objective=False)#

Train the LatentNeuralODE model.

Parameters:
  • train_loader (DataLoader) – The data loader for the training data.

  • test_loader (DataLoader) – The data loader for the test data.

  • epochs (int | None) – The number of epochs to train the model. If None, uses the value from the config.

  • position (int) – The position of the progress bar.

  • description (str) – The description for the progress bar.

  • multi_objective (bool) – Whether multi-objective optimization is used. If True, trial.report is not used (Optuna does not support it for multi-objective studies).

Return type:

None

fit_profile(train_loader, test_loader, epochs, position=0, description='Training LatentNeuralODE', multi_objective=False)#
Return type:

None

forward(inputs)#

Forward pass through the model. Expects inputs to be either (data, timesteps) or (data, timesteps, params).

prepare_data(dataset_train, dataset_test, dataset_val, timesteps, batch_size=128, shuffle=True, dummy_timesteps=True, dataset_train_params=None, dataset_test_params=None, dataset_val_params=None)#

Prepare the data for training, testing, and validation, returning a DataLoader for each split.

Parameters:
  • dataset_train (np.ndarray) – The training dataset.

  • dataset_test (np.ndarray) – The testing dataset.

  • dataset_val (np.ndarray) – The validation dataset.

  • timesteps (np.ndarray) – The timesteps.

  • batch_size (int) – The batch size.

  • shuffle (bool) – Whether to shuffle the data.

  • dummy_timesteps (bool) – Whether to use dummy timesteps. Defaults to True.

Returns:

The DataLoader objects for the training, testing, and validation data.

Return type:

tuple[DataLoader, DataLoader, DataLoader]
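A hedged end-to-end sketch; the (n_samples, n_timesteps, n_quantities) data layout and the structure of the yielded batches are assumptions, not guaranteed by this reference:

```python
import numpy as np

from codes.surrogates.LatentNeuralODE.latent_neural_ode import LatentNeuralODE

# Synthetic data, assumed layout (n_samples, n_timesteps, n_quantities).
timesteps = np.linspace(0.0, 1.0, 100)
dataset_train = np.random.rand(64, 100, 29)
dataset_test = np.random.rand(16, 100, 29)
dataset_val = np.random.rand(16, 100, 29)

model = LatentNeuralODE(device="cpu", n_quantities=29, n_timesteps=100)
train_loader, test_loader, val_loader = model.prepare_data(
    dataset_train=dataset_train,
    dataset_test=dataset_test,
    dataset_val=dataset_val,
    timesteps=timesteps,
    batch_size=32,
)

model.fit(train_loader, test_loader, epochs=10)

# Forward pass on one batch; batches are assumed to already carry the
# (data, timesteps) or (data, timesteps, params) structure expected by forward().
outputs = model(next(iter(test_loader)))
```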

class codes.surrogates.LatentNeuralODE.latent_neural_ode.ModelWrapper(config, n_quantities, n_parameters=0, n_timesteps=101, dtype=torch.float64, use_pid=False, adjoint_type='autodiff')#

Bases: Module

Wraps the encoder, decoder, and neural ODE in three distinct modes:

  1. No parameters (n_parameters=0)

     • Encoder: input = state_dim
     • ODE: latent_dim -> latent_dim (the solver always evolves the latent state)
     • Decoder: latent_dim -> output dimensions

  2. encode_params=True

     • Encoder: input = state_dim + param_dim
     • ODE: latent_dim -> latent_dim
     • Decoder: latent_dim -> output dimensions

  3. encode_params=False

     • Encoder: input = state_dim
     • Base ODE: (latent_dim + param_dim) -> latent_dim
     • Wrapped in ODEWithParams so that the solver state has dimension latent_dim
     • Decoder: latent_dim -> output dimensions
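A sketch of mode 3 (encode_params=False); passing the config dataclass directly and the shape of the returned trajectories are assumptions based on the descriptions in this module:

```python
import torch

from codes.surrogates.LatentNeuralODE.latent_neural_ode import ModelWrapper
from codes.surrogates.LatentNeuralODE.latent_neural_ode_config import LatentNeuralODEBaseConfig

# Mode 3: fixed parameters are injected into the ODE network, not the encoder.
config = LatentNeuralODEBaseConfig(latent_features=5, encode_params=False)
wrapper = ModelWrapper(config=config, n_quantities=29, n_parameters=2, n_timesteps=100)

x0 = torch.randn(8, 29, dtype=torch.float64)                  # initial states
t_range = torch.linspace(0.0, 1.0, 100, dtype=torch.float64)  # shared time grid
params = torch.randn(8, 2, dtype=torch.float64)               # fixed parameters per sample

x_pred = wrapper(x0, t_range, params=params)  # predicted trajectories
```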

first_derivative(x)#
forward(x0, t_range, params=None)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

identity_loss(x_true, params=None)#

Calculate the identity loss (Encoder -> Decoder) on the initial state x0.

Parameters:
  • x_true (Tensor) – The full trajectory (batch, timesteps, features).

  • params (Tensor | None) – Fixed parameters (batch, n_parameters).

Returns:

The identity loss on x0.

Return type:

Tensor

second_derivative(x)#
total_loss(x_true, x_pred, params=None, criterion=MSELoss())#

Total loss: weighted sum of trajectory reconstruction, identity, first derivative, and second derivative losses. All terms remain in the computation graph.
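Continuing the sketch above (same wrapper, t_range, and params), a hypothetical training step built around total_loss:

```python
x_true = torch.randn(8, 100, 29, dtype=torch.float64)      # reference trajectories
x_pred = wrapper(x_true[:, 0, :], t_range, params=params)  # roll out from the initial state

# Weighted sum of reconstruction, identity, and derivative terms; all stay in the graph.
loss = wrapper.total_loss(x_true, x_pred, params=params)
loss.backward()
```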

class codes.surrogates.LatentNeuralODE.latent_neural_ode.ODE(input_shape, output_shape, activation, ode_layers, ode_width, tanh_reg, dtype=torch.float64)#

Bases: Module

Neural ODE module defining the function for latent dynamics.

forward(t, x)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class codes.surrogates.LatentNeuralODE.latent_neural_ode.ODEWithParams(base_ode, n_parameters, latent_dim)#

Bases: Module

Wraps a base ODE module so that fixed parameters are injected as constants. The solver sees only the latent state y (dim = latent_dim), but ODEWithParams.forward concatenates y with the stored parameters to compute dy/dt.

forward(t, y)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type:

Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

set_params(params)#
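A sketch of wrapping a base ODE so that the solver only evolves the latent state; the base ODE's input and output sizes follow mode 3 of ModelWrapper above, and everything else is illustrative:

```python
import torch
import torch.nn as nn

from codes.surrogates.LatentNeuralODE.latent_neural_ode import ODE, ODEWithParams

latent_dim, n_params = 5, 2

# Base ODE maps (latent_dim + param_dim) -> latent_dim, as in mode 3 of ModelWrapper.
base_ode = ODE(
    input_shape=latent_dim + n_params,
    output_shape=latent_dim,
    activation=nn.ReLU(),
    ode_layers=4,
    ode_width=64,
    tanh_reg=True,
)
ode = ODEWithParams(base_ode, n_parameters=n_params, latent_dim=latent_dim)

params = torch.randn(8, n_params, dtype=torch.float64)
ode.set_params(params)                                # store the constant parameters

y = torch.randn(8, latent_dim, dtype=torch.float64)   # latent state seen by the solver
dy_dt = ode(torch.tensor(0.0), y)                     # dy/dt; y is concatenated with params internally
```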
class codes.surrogates.LatentNeuralODE.latent_neural_ode.OldDecoder(out_features, latent_features=5, layers_factor=8, activation=ReLU())#

Bases: Module

Old decoder corresponding to the old encoder.

forward(x)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class codes.surrogates.LatentNeuralODE.latent_neural_ode.OldEncoder(in_features, latent_features=5, layers_factor=8, activation=ReLU())#

Bases: Module

Old encoder using a fixed 4-2-1 structure.

forward(x)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

codes.surrogates.LatentNeuralODE.latent_neural_ode_config module#

class codes.surrogates.LatentNeuralODE.latent_neural_ode_config.LatentNeuralODEBaseConfig(learning_rate=0.0003, regularization_factor=0.0, optimizer='adamw', momentum=0.0, scheduler='cosine', poly_power=0.9, eta_min=0.1, activation=None, loss_function=None, beta=0.0, model_version='v2', latent_features=5, layers_factor=8, coder_layers=3, coder_width=64, ode_layers=4, ode_width=64, ode_tanh_reg=True, rtol=1e-06, atol=1e-06, encode_params=False)#

Bases: AbstractSurrogateBaseConfig

Configuration for the LatentNeuralODE surrogate model.

This dataclass defines all hyperparameters required to configure the architecture and training of a latent neural ODE. It supports both fixed and flexible encoder/decoder structures depending on the model_version flag.

Attributes:
  • model_version (str) – Indicates the model architecture style.
      - “v1”: Fixed structure (e.g., a 4-2-1 layout scaled by layers_factor).
      - “v2”: Fully connected architecture with flexible depth/width.

  • latent_features (int) – Size of the latent space (z-dimension).

  • layers_factor (int) – Scaling factor for the number of neurons in the encoder/decoder. Used in “v1” to determine layer widths based on coder_hidden.

  • coder_layers (int) – Number of hidden layers in both encoder and decoder (used in v2).

  • coder_width (int) – Number of neurons per hidden layer in encoder/decoder (used in v2).

  • ode_layers (int) – Number of hidden layers in the ODE module.

  • ode_width (int) – Number of neurons in each hidden layer of the ODE module.

  • ode_tanh_reg (bool) – Whether to apply tanh regularization in the ODE output.

  • rtol (float) – Relative tolerance for the ODE solver.

  • atol (float) – Absolute tolerance for the ODE solver.

  • encode_params (bool) – Whether to encode fixed parameters in the encoder. If False, parameters bypass the encoder and are instead passed as additional inputs to the ODE network.

atol: float = 1e-06#
coder_layers: int = 3#
coder_width: int = 64#
encode_params: bool = False#
latent_features: int = 5#
layers_factor: int = 8#
model_version: str = 'v2'#
ode_layers: int = 4#
ode_tanh_reg: bool = True#
ode_width: int = 64#
rtol: float = 1e-06#
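A minimal configuration sketch; omitted fields fall back to the documented defaults:

```python
from codes.surrogates.LatentNeuralODE.latent_neural_ode_config import LatentNeuralODEBaseConfig

# "v2" architecture with parameters injected into the ODE network rather than the encoder.
config = LatentNeuralODEBaseConfig(
    latent_features=5,
    coder_layers=3,
    coder_width=64,
    ode_layers=4,
    ode_width=64,
    encode_params=False,
)
```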

codes.surrogates.LatentNeuralODE.utilities module#

class codes.surrogates.LatentNeuralODE.utilities.ChemDataset(raw_data, timesteps, device, parameters)#

Bases: Dataset

Dataset class for the latent neural ODE model. Returns each sample along with its timesteps and (optionally) fixed parameters.
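A minimal sketch; the (n_samples, n_timesteps, n_quantities) layout and passing parameters=None when there are no fixed parameters are assumptions:

```python
import numpy as np

from codes.surrogates.LatentNeuralODE.utilities import ChemDataset

raw_data = np.random.rand(64, 100, 29)   # assumed layout: (samples, timesteps, quantities)
timesteps = np.linspace(0.0, 1.0, 100)

dataset = ChemDataset(raw_data, timesteps, device="cpu", parameters=None)
sample = dataset[0]  # one trajectory with its timesteps (and fixed parameters, if provided)
```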

class codes.surrogates.LatentNeuralODE.utilities.FlatSeqBatchIterable(data_t, timesteps_t, params_t, batch_size, shuffle)#

Bases: IterableDataset

Module contents#