smash.Model.set_control_optimize

Model.set_control_optimize(control_vector, mapping='uniform', optimizer=None, optimize_options=None)

Retrieve the Model parameters/states from the optimization control vector.

Parameters:
control_vector : numpy.ndarray

A 1D array representing the control values, ideally obtained from the optimize (or Model.optimize) method.

mapping : str, default 'uniform'

Type of mapping. Should be one of

  • 'uniform'

  • 'distributed'

  • 'multi-linear'

  • 'multi-power'

  • 'ann'

Hint

See the Mapping section.
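
For illustration, a non-default mapping is passed as a keyword argument; presumably it should match the mapping used by the optimization that produced the control vector (a hedged sketch reusing the signature above):

>>> model.set_control_optimize(control_vector, mapping="multi-linear")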

optimizer : str or None, default None

Name of optimizer. Should be one of

  • 'sbs' (only for 'uniform' mapping)

  • 'nelder-mead' (only for 'uniform' mapping)

  • 'powell' (only for 'uniform' mapping)

  • 'lbfgsb' (for all mappings except 'ann')

  • 'adam' (for all mappings)

  • 'adagrad' (for all mappings)

  • 'rmsprop' (for all mappings)

  • 'sgd' (for all mappings)

Note

If not given, a default optimizer will be set as follows:

  • 'sbs' for mapping = 'uniform'

  • 'lbfgsb' for mapping = 'distributed', 'multi-linear', 'multi-power'

  • 'adam' for mapping = 'ann'

Hint

See the Optimization Algorithms section.
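
As a further sketch, the optimizer can be given alongside a compatible mapping, following the compatibility list above (an illustrative pairing, not a recommendation):

>>> model.set_control_optimize(control_vector, mapping="distributed", optimizer="lbfgsb")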

optimize_options : dict[str, Any] or None, default None

Dictionary containing optimization options for fine-tuning the optimization process. See default_optimize_options to retrieve the default optimize options based on the mapping and optimizer. A combined example using several of these options is shown after the option descriptions below.

parameters : str, list[str, …] or None, default None

Name of parameters to optimize. Should be one parameter name or a sequence of parameter names:

>>> optimize_options = {
    "parameters": "cp",
}
>>> optimize_options = {
    "parameters": ["cp", "ct", "kexc", "llr"],
}

Note

If not given, all parameters in Model.rr_parameters and Model.nn_parameters (if used) will be optimized.

bounds : dict[str, tuple[float, float]] or None, default None

Bounds on optimized parameters. A dictionary where the keys represent parameter names, and the values are pairs of (min, max) values (i.e., a list or tuple) with min lower than max. The keys must be included in parameters.

>>> optimize_options = {
    "bounds": {
        "cp": (1, 2000),
        "ct": (1, 1000),
        "kexc": (-10, 5)
        "llr": (1, 1000)
    },
}

Note

If not given, default bounds will be applied to each parameter. See Model.get_rr_parameters_bounds and Model.get_rr_initial_states_bounds.

control_tfm : str or None, default None

Transformation method applied to bounded parameters of the control vector. Should be one of

  • 'keep'

  • 'normalize'

  • 'sbs' ('sbs' optimizer only)
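
For example, to explicitly keep the normalized transformation (an illustrative choice among the values listed above):

>>> optimize_options = {
    "control_tfm": "normalize",
}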

Note

If not given, the default control vector transformation is control_tfm = 'normalize', except for the 'sbs' optimizer, where control_tfm = 'sbs'. This option is not used when mapping is 'ann'.

descriptor : dict[str, list[str, …]] or None, default None

Descriptors linked to optimized parameters. A dictionary where the keys represent parameter names, and the values are lists of descriptor names. The keys must be included in parameters.

>>> optimize_options = {
    "descriptor": {
        "cp": ["slope", "dd"],
        "ct": ["slope"],
        "kexc": ["slope", "dd"],
        "llr": ["dd"],
    },
}

Note

If not given, all descriptors will be used for each parameter. This option is only used when mapping is 'multi-linear' or 'multi-power'. In the case of 'ann', all descriptors will be used.

net : Net or None, default None

The regionalization neural network used to learn the descriptors-to-parameters mapping.

Note

If not given, a default neural network will be used. This option is only used when mapping is 'ann'. See Net to learn how to create a customized neural network for training.

learning_rate : float or None, default None

The learning rate used for updating trainable parameters when using adaptive optimizers (i.e., 'adam', 'adagrad', 'rmsprop', 'sgd').

Note

If not given, a default learning rate for each optimizer will be used.
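
For example (the value 0.01 is purely illustrative, not a documented default):

>>> optimize_options = {
    "learning_rate": 0.01,
}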

random_state : int or None, default None

A random seed used to initialize neural network parameters.

Note

If not given, the neural network parameters will be initialized with a random seed. This option is only used when mapping is 'ann' and the weights and biases of net are not yet initialized.
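
For example, to make the network initialization reproducible (the seed value is arbitrary):

>>> optimize_options = {
    "random_state": 21,
}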

termination_crit : dict[str, Any] or None, default None

Termination criteria. The elements are:

  • 'maxiter': The maximum number of iterations.

  • 'xatol': Absolute error in solution parameters between iterations that is acceptable for convergence. Only used when optimizer is 'nelder-mead'.

  • 'fatol': Absolute error in cost function value between iterations that is acceptable for convergence. Only used when optimizer is 'nelder-mead'.

  • 'factr': An additional termination criterion based on cost values. Only used when optimizer is 'lbfgsb'.

  • 'pgtol': An additional termination criterion based on the projected gradient of the cost function. Only used when optimizer is 'lbfgsb'.

  • 'early_stopping': A positive number to stop training when the cost function does not decrease below the current optimal value for early_stopping consecutive iterations. When set to zero, early stopping is disabled, and the training continues for the full number of iterations. Only used for adaptive optimizers (i.e., 'adam', 'adagrad', 'rmsprop', 'sgd').

>>> optimize_options = {
    "termination_crit": {
        "maxiter": 10,
        "factr": 1e6,
    },
}
>>> optimize_options = {
    "termination_crit": {
        "maxiter": 200,
        "early_stopping": 20,
    },
}

Note

If not given, default values are set for each element.
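
Putting several of the options above together, a hedged sketch of a full call might look as follows (the parameter names and bounds reuse the examples above; presumably the options should match those used when the control vector was generated):

>>> optimize_options = {
    "parameters": ["cp", "ct"],
    "bounds": {
        "cp": (1, 2000),
        "ct": (1, 1000),
    },
}
>>> model.set_control_optimize(control_vector, mapping="uniform", optimize_options=optimize_options)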

Examples

>>> import smash
>>> from smash.factory import load_dataset
>>> setup, mesh = load_dataset("cance")
>>> model = smash.Model(setup, mesh)

Define a callback function to store the control vector solutions during the optimization process

>>> iter_control = []
>>> def callback(iopt, icontrol=iter_control):
...     icontrol.append(iopt.control_vector)

Optimize the Model

>>> model.optimize(callback=callback)

Retrieve the model parameters from the control vector solution at the first iteration

>>> model.set_control_optimize(iter_control[0])

Perform a forward run to update the hydrological responses and final states

>>> model.forward_run()
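
Similarly, the control vector solution at the last stored iteration can be applied and evaluated with another forward run

>>> model.set_control_optimize(iter_control[-1])
>>> model.forward_run()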