codes.surrogates.FCNN package#

Submodules#

codes.surrogates.FCNN.fcnn module#

class codes.surrogates.FCNN.fcnn.FCFlatBatchIterable(inputs_t, targets_t, batch_size, shuffle)#

Bases: IterableDataset
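
This class has no docstring here; a minimal usage sketch, assuming the iterable yields (input, target) batch pairs and that the tensor widths below (29 quantities plus one time column) are merely representative:

    import torch
    from codes.surrogates.FCNN.fcnn import FCFlatBatchIterable

    # Hypothetical flat tensors: 1000 (sample, time) pairs,
    # 29 quantities + 1 time column per input row.
    inputs_t = torch.randn(1000, 30)
    targets_t = torch.randn(1000, 29)

    iterable = FCFlatBatchIterable(inputs_t, targets_t, batch_size=256, shuffle=True)
    for x_batch, y_batch in iterable:  # assumption: yields (inputs, targets) pairs
        pass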

class codes.surrogates.FCNN.fcnn.FCPrebatchedDataset(inputs_batches, targets_batches)#

Bases: Dataset

Dataset for pre-batched data specifically for the FullyConnected model.

Parameters:
  • inputs_batches (list[Tensor]) – List of precomputed input batches.

  • targets_batches (list[Tensor]) – List of precomputed target batches.
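
A minimal construction sketch, assuming 29 quantities plus one time column per input row and hypothetical pre-built batches of 256 samples:

    import torch
    from codes.surrogates.FCNN.fcnn import FCPrebatchedDataset

    # Ten precomputed batches of 256 samples each.
    inputs_batches = [torch.randn(256, 30) for _ in range(10)]
    targets_batches = [torch.randn(256, 29) for _ in range(10)]

    dataset = FCPrebatchedDataset(inputs_batches, targets_batches)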

class codes.surrogates.FCNN.fcnn.FullyConnected(device=None, n_quantities=29, n_timesteps=100, n_parameters=0, training_id=None, config=None)#

Bases: AbstractSurrogateModel

create_dataloader(data, timesteps, batch_size, shuffle, dataset_params, num_workers=0, pin_memory=True)#

Build CPU tensors once and yield shuffled batches each epoch. The data array has shape (n_samples, n_timesteps, n_quantities); each input is [initial_state, time, (params?)] and each target is the state at that time.
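
A sketch of the documented sample layout using plain torch tensors (optional parameters omitted; all sizes are hypothetical):

    import torch

    # Raw data as described above: (n_samples, n_timesteps, n_quantities).
    n_samples, n_timesteps, n_quantities = 8, 100, 29
    data = torch.randn(n_samples, n_timesteps, n_quantities)
    timesteps = torch.linspace(0.0, 1.0, n_timesteps)

    # One flat training pair per (sample, time) combination:
    # input = [initial_state, time], target = state at that time.
    s, t = 3, 42
    x = torch.cat([data[s, 0], timesteps[t].unsqueeze(0)])  # (n_quantities + 1,)
    y = data[s, t]                                          # (n_quantities,)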

epoch(data_loader, criterion, optimizer)#

Return type:

float

fit(train_loader, test_loader, epochs, position=0, description='Training FullyConnected', multi_objective=False)#

Train the FullyConnected model.

Parameters:
  • train_loader (DataLoader) – The DataLoader object containing the training data.

  • test_loader (DataLoader) – The DataLoader object containing the test data.

  • epochs (int, optional) – The number of epochs to train the model.

  • position (int) – The position of the progress bar.

  • description (str) – The description for the progress bar.

  • multi_objective (bool) – Whether multi-objective optimization is used. If True, trial.report is not used (not supported by Optuna).

Return type:

None

Returns:

None. The training loss, test loss, and MAE are stored in the model.

forward(inputs)#

Forward pass for the FullyConnected model.

Parameters:

inputs (tuple[torch.Tensor, torch.Tensor]) – (x, targets); ‘targets’ is included only to keep the interface consistent across surrogate models.

Return type:

Tensor

Returns:

(outputs, targets)

prepare_data(dataset_train, dataset_test, dataset_val, timesteps, batch_size, shuffle=True, dummy_timesteps=True, dataset_train_params=None, dataset_test_params=None, dataset_val_params=None)#

Prepare the data for training, testing, and validation, and return the DataLoader objects for the three splits.

Parameters:
  • dataset_train (np.ndarray) – The training dataset.

  • dataset_test (np.ndarray) – The testing dataset.

  • dataset_val (np.ndarray) – The validation dataset.

  • timesteps (np.ndarray) – The timesteps.

  • batch_size (int) – The batch size.

  • shuffle (bool) – Whether to shuffle the data.

  • dummy_timesteps (bool) – Whether to use dummy timesteps. Defaults to True.

Returns:

The DataLoader objects for the training, testing, and validation data.

Return type:

tuple[DataLoader, DataLoader, DataLoader]
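
A minimal end-to-end sketch tying prepare_data and fit together. It assumes the default config is usable as-is, that a plain "cpu" device string is accepted, and it uses tiny random arrays purely for illustration:

    import numpy as np
    from codes.surrogates.FCNN.fcnn import FullyConnected

    rng = np.random.default_rng(0)
    train = rng.random((64, 100, 29)).astype(np.float32)  # (n_samples, n_timesteps, n_quantities)
    test = rng.random((16, 100, 29)).astype(np.float32)
    val = rng.random((16, 100, 29)).astype(np.float32)
    timesteps = np.linspace(0.0, 1.0, 100)

    model = FullyConnected(device="cpu", n_quantities=29, n_timesteps=100)
    train_loader, test_loader, val_loader = model.prepare_data(
        dataset_train=train,
        dataset_test=test,
        dataset_val=val,
        timesteps=timesteps,
        batch_size=256,
        shuffle=True,
    )
    model.fit(train_loader, test_loader, epochs=10)  # losses and MAE are stored on the model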

class codes.surrogates.FCNN.fcnn.FullyConnectedNet(input_size, hidden_size, output_size, num_hidden_layers, activation=ReLU())#

Bases: Module

forward(inputs)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Return type:

Tensor

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
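
A small sketch constructing the plain Module with hypothetical sizes (29 quantities plus one time input) and invoking it as the note above recommends:

    import torch
    from torch import nn
    from codes.surrogates.FCNN.fcnn import FullyConnectedNet

    net = FullyConnectedNet(
        input_size=30,        # n_quantities + 1 time input (assumption)
        hidden_size=150,
        output_size=29,
        num_hidden_layers=5,
        activation=nn.ReLU(),
    )

    x = torch.randn(64, 30)
    y = net(x)  # call the Module itself so registered hooks run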

codes.surrogates.FCNN.fcnn.fc_collate_fn(batch)#

Custom collate function to ensure tensors are returned in the correct shape.

Parameters:

batch (list[tuple[torch.Tensor, torch.Tensor]]) – List of precomputed batches. Each tuple contains an input_batch with shape [batch_size, n_quantities + 1] and target_batch with shape [batch_size, n_quantities].

Returns:

Inputs and targets with the same shapes as described above.

Return type:

tuple[torch.Tensor, torch.Tensor]
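
A direct-call sketch, assuming the surrounding DataLoader hands the collate function a single pre-built batch (for example batch_size=1 over FCPrebatchedDataset); the shapes follow the docstring with 29 quantities:

    import torch
    from codes.surrogates.FCNN.fcnn import fc_collate_fn

    # One precomputed batch of 32 samples: inputs [32, 30], targets [32, 29].
    batch = [(torch.randn(32, 30), torch.randn(32, 29))]

    inputs, targets = fc_collate_fn(batch)
    # Per the docstring, the shapes are preserved: [32, 30] and [32, 29].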

codes.surrogates.FCNN.fcnn_config module#

class codes.surrogates.FCNN.fcnn_config.FCNNBaseConfig(learning_rate=0.0003, regularization_factor=0.0, optimizer='adamw', momentum=0.0, scheduler='cosine', poly_power=0.9, eta_min=0.1, activation=None, loss_function=None, beta=0.0, hidden_size=150, num_hidden_layers=5)#

Bases: AbstractSurrogateBaseConfig

Configuration for the Fully Connected Neural Network (FCNN) surrogate model.

This config defines the structure and training settings for a standard MLP-style surrogate model with fully connected layers.

Attributes:
  • hidden_size (int) – Number of neurons per hidden layer.

  • num_hidden_layers (int) – Number of hidden layers in the network.

hidden_size: int = 150#
num_hidden_layers: int = 5#
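
A configuration sketch overriding only the architecture fields; it assumes the config behaves like a standard dataclass with the keyword defaults shown above, and the commented hand-off to FullyConnected is hypothetical:

    from codes.surrogates.FCNN.fcnn_config import FCNNBaseConfig

    # Wider but shallower network than the defaults (150 x 5).
    config = FCNNBaseConfig(hidden_size=256, num_hidden_layers=4)

    # Hypothetical hand-off to the surrogate:
    # model = FullyConnected(device="cpu", n_quantities=29, n_timesteps=100, config=config)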

Module contents#