smash.Model.bayesian_optimize
- Model.bayesian_optimize(mapping='uniform', optimizer=None, optimize_options=None, cost_options=None, common_options=None, return_options=None, callback=None)
Model Bayesian assimilation using numerical optimization algorithms.
- Parameters:
- mapping : str, default 'uniform'
Type of mapping. Should be one of
- 'uniform'
- 'distributed'
- 'multi-linear'
- 'multi-power'
Hint
See the Mapping section.
- optimizer : str or None, default None
Name of optimizer. Should be one of
- 'sbs' (only for 'uniform' mapping)
- 'nelder-mead' (only for 'uniform' mapping)
- 'powell' (only for 'uniform' mapping)
- 'lbfgsb' (for all mappings)
- 'adam' (for all mappings)
- 'adagrad' (for all mappings)
- 'rmsprop' (for all mappings)
- 'sgd' (for all mappings)
Note
If not given, a default optimizer will be set as follows:
- 'sbs' for mapping = 'uniform'
- 'lbfgsb' for mapping = 'distributed', 'multi-linear' or 'multi-power'
Hint
See the Optimization Algorithms section.
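For instance, a minimal sketch (assuming model is an existing smash.Model instance, as built in the Examples section below) selecting a mapping and a compatible optimizer explicitly:
>>> # 'lbfgsb' supports every mapping; 'sbs', 'nelder-mead' and 'powell'
>>> # are restricted to the 'uniform' mapping
>>> model.bayesian_optimize(mapping="multi-linear", optimizer="lbfgsb")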
- optimize_options : dict[str, Any] or None, default None
Dictionary containing optimization options for fine-tuning the optimization process. See default_bayesian_optimize_options to retrieve the default optimize options based on the mapping and optimizer.
- parameters : str, list[str, …] or None, default None
Name of parameters to optimize. Should be one or a sequence of any key of:
- Model.rr_parameters
- Model.rr_initial_states
- Model.serr_mu_parameters
- Model.serr_sigma_parameters
- Model.nn_parameters, if using a hybrid model structure (depending on hydrological_module)
>>> optimize_options = {
...     "parameters": "cp",
... }
>>> optimize_options = {
...     "parameters": ["cp", "ct", "kexc", "llr"],
... }
Note
If not given, all parameters in Model.rr_parameters, Model.nn_parameters (if used), Model.serr_mu_parameters and Model.serr_sigma_parameters will be optimized.
- bounds : dict[str, tuple[float, float]] or None, default None
Bounds on optimized parameters. A dictionary where the keys represent parameter names, and the values are pairs of (min, max) values (i.e., a list or tuple) with min lower than max. The keys must be included in parameters.
>>> optimize_options = {
...     "bounds": {
...         "cp": (1, 2000),
...         "ct": (1, 1000),
...         "kexc": (-10, 5),
...         "llr": (1, 1000),
...     },
... }
Note
If not given, default bounds will be applied to each parameter. See Model.get_rr_parameters_bounds, Model.get_rr_initial_states_bounds, Model.get_serr_mu_parameters_bounds and Model.get_serr_sigma_parameters_bounds.
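As an illustrative sketch (assuming the getter methods above return dictionaries of (min, max) pairs), the default bounds can be inspected and selectively overridden:
>>> default_bounds = model.get_rr_parameters_bounds()  # assumed to return a dict
>>> optimize_options = {
...     "bounds": {**default_bounds, "cp": (1, 2000)},  # override "cp" only
... }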
- control_tfm : str or None, default None
Transformation method applied to bounded parameters of the control vector. Should be one of
- 'keep'
- 'normalize'
- 'sbs' ('sbs' optimizer only)
Note
If not given, the default control vector transformation is control_tfm = 'normalize', except for the 'sbs' optimizer, where control_tfm = 'sbs'. This option is not used when mapping is 'ann'.
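For example, a hedged sketch keeping the optimizer in the native parameter space by disabling the transformation:
>>> optimize_options = {
...     "control_tfm": "keep",  # no transformation applied to bounded parameters
... }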
- descriptor : dict[str, list[str, …]] or None, default None
Descriptors linked to optimized parameters. A dictionary where the keys represent parameter names, and the values are lists of descriptor names. The keys must be included in parameters.
>>> optimize_options = {
...     "descriptor": {
...         "cp": ["slope", "dd"],
...         "ct": ["slope"],
...         "kexc": ["slope", "dd"],
...         "llr": ["dd"],
...     },
... }
Note
If not given, all descriptors will be used for each parameter. This option is only used when mapping is 'multi-linear' or 'multi-power'. In case of 'ann', all descriptors will be used.
- termination_crit : dict[str, Any] or None, default None
Termination criteria. The elements are:
- 'maxiter': The maximum number of iterations.
- 'xatol': Absolute error in solution parameters between iterations that is acceptable for convergence. Only used when optimizer is 'nelder-mead'.
- 'fatol': Absolute error in cost function value between iterations that is acceptable for convergence. Only used when optimizer is 'nelder-mead'.
- 'factr': An additional termination criterion based on cost values. Only used when optimizer is 'lbfgsb'.
- 'pgtol': An additional termination criterion based on the projected gradient of the cost function. Only used when optimizer is 'lbfgsb'.
- 'early_stopping': A positive number to stop training when the cost function does not decrease below the current optimal value for early_stopping consecutive iterations. When set to zero, early stopping is disabled, and the training continues for the full number of iterations. Only used for adaptive optimizers (i.e., 'adam', 'adagrad', 'rmsprop', 'sgd').
>>> optimize_options = {
...     "termination_crit": {
...         "maxiter": 10,
...         "factr": 1e6,
...     },
... }
>>> optimize_options = {
...     "termination_crit": {
...         "maxiter": 200,
...         "early_stopping": 20,
...     },
... }
Note
If not given, default values are set for each element.
- cost_options : dict[str, Any] or None, default None
Dictionary containing computation cost options for simulated and observed responses. The elements are:
- end_warmup : str, pandas.Timestamp or None, default None
The end of the warm-up period, which must be between the start time and the end time defined in Model.setup.
>>> cost_options = {
...     "end_warmup": "1997-12-21",
... }
>>> cost_options = {
...     "end_warmup": pd.Timestamp("19971221"),
... }
Note
If not given, it is set to be equal to the Model.setup start time.
- gauge : str or list[str, …], default 'dws'
Type of gauge to be computed. There are two ways to specify it:
- An alias among 'all' (all gauge codes) or 'dws' (most downstream gauge code(s))
- A gauge code or any sequence of gauge codes. The gauge code(s) given must belong to the gauge codes defined in the Model.mesh
>>> cost_options = {
...     "gauge": "dws",
... }
>>> cost_options = {
...     "gauge": "V3524010",
... }
>>> cost_options = {
...     "gauge": ["V3524010", "V3515010"],
... }
- control_prior : dict[str, list[str, list[float]]] or None, default None
Prior applied to the control vector. A dictionary containing the type of prior to link to the control vector. The keys are any control parameter name (i.e. 'cp-0', 'cp-1-1', 'cp-slope-a', etc.); see bayesian_optimize_control_info to retrieve control parameter names. The values are lists of length 2 containing distribution information (i.e. distribution name and parameters). Below is the set of available distributions and the associated number of parameters:
- 'FlatPrior', [] (0)
- 'Uniform', [lower_bound, higher_bound] (2)
- 'Gaussian', [mean, standard_deviation] (2)
- 'Exponential', [threshold, scale] (2)
- 'LogNormal', [mean_log, standard_deviation_log] (2)
- 'Triangle', [peak, lower_bound, higher_bound] (3)
>>> cost_options = {
...     "control_prior": {
...         "cp-0": ["Gaussian", [200, 100]],
...         "kexc-0": ["Gaussian", [0, 5]],
...     },
... }
Note
If not given, 'FlatPrior' is applied to each control vector parameter (i.e. equivalent to no prior).
Hint
See a more detailed explanation on the available distributions in the Bayesian Estimation section.
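As a sketch (assuming bayesian_optimize_control_info returns a dictionary exposing the control names under a "name" key), the valid control_prior keys can be listed before assigning priors:
>>> control_info = smash.bayesian_optimize_control_info(model)
>>> control_info["name"]  # control parameter names usable as "control_prior" keys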
- common_options : dict[str, Any] or None, default None
Dictionary containing common options with two elements:
- ncpu : int, default 1
Number of CPU(s) to perform a parallel computation.
Warning
Parallel computation is not supported on Windows.
- verbose : bool, default False
Whether to display information about the running method.
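For instance, a minimal sketch combining both elements:
>>> common_options = {
...     "ncpu": 4,        # parallel computation on 4 CPUs (not supported on Windows)
...     "verbose": True,  # display information about the running method
... }
>>> model.bayesian_optimize(common_options=common_options)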
- return_options : dict[str, Any] or None, default None
Dictionary containing return options to save additional simulation results. The elements are:
- time_step : str, pandas.Timestamp, pandas.DatetimeIndex or list[str, …], default 'all'
Returned time steps. There are five ways to specify it:
- An alias among 'all' (return all time steps).
- A date as a string which respects the pandas.Timestamp format.
- A pandas.Timestamp.
- A pandas.DatetimeIndex.
- A sequence of dates as strings.
>>> return_options = {
...     "time_step": "all",
... }
>>> return_options = {
...     "time_step": "1997-12-21",
... }
>>> return_options = {
...     "time_step": pd.Timestamp("19971221"),
... }
>>> return_options = {
...     "time_step": pd.date_range(
...         start="1997-12-21",
...         end="1998-12-21",
...         freq="1D"
...     ),
... }
>>> return_options = {
...     "time_step": ["1998-05-23", "1998-05-24", "1998-05-25"],
... }
Note
It only applies to the following variables: 'rr_states' and 'q_domain'.
- rr_states : bool, default False
Whether to return rainfall-runoff states for specific time steps.
- q_domain : bool, default False
Whether to return simulated discharge on the whole domain for specific time steps.
- internal_fluxes : bool, default False
Whether to return internal fluxes, depending on the model structure, on the whole domain for specific time steps.
- control_vector : bool, default False
Whether to return the control vector solution of the optimization (it can be transformed).
- cost : bool, default False
Whether to return the cost value.
- n_iter : bool, default False
Whether to return the number of iterations performed.
- projg : bool, default False
Whether to return the projected gradient value (infinity norm of the Jacobian matrix).
- log_lkh : bool, default False
Whether to return the log likelihood component value.
- log_prior : bool, default False
Whether to return the log prior component value.
- log_h : bool, default False
Whether to return the log h component value.
- serr_mu : bool, default False
Whether to return mu, the mean of structural errors. It can also be returned directly from the Model object using the Model.get_serr_mu method.
- serr_sigma : bool, default False
Whether to return sigma, the standard deviation of structural errors. It can also be returned directly from the Model object using the Model.get_serr_sigma method.
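Putting these together, a hedged sketch (assuming the returned object exposes the requested keys as attributes, as described in the Returns section below) requesting a few optional results:
>>> ret = model.bayesian_optimize(
...     return_options={"cost": True, "log_lkh": True, "serr_sigma": True},
... )
>>> ret.cost     # final cost value
>>> ret.log_lkh  # log likelihood component value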
- callback : callable or None, default None
A callable called after each iteration with the signature callback(iopt: BayesianOptimize), where iopt is a keyword argument representing an instance of the BayesianOptimize class that contains intermediate optimization results with attributes:
- 'control_vector': The current control vector.
- 'cost': The current cost value.
- 'n_iter': The current number of iterations performed by the optimizer.
- 'projg': The current projected gradient, available if using gradient-based optimizers.
>>> import numpy as np
>>> iter_cost = []  # to get the cost values through iterations
>>> def callback_func(iopt, icost=iter_cost):
...     icost.append(iopt.cost)
...     # save the current control vector value to a text file
...     np.savetxt(f"control_iter_{len(icost)}.txt", iopt.control_vector)
>>> callback = callback_func
Note
The name of the argument must be iopt for the callback to be passed as a BayesianOptimize object.
- Returns:
- bayesian_optimize : BayesianOptimize or None, default None
It returns an object containing additional simulation results with the keys defined in return_options. If no keys are defined, it returns None.
See also
BayesianOptimize : Represents bayesian optimize optional results.
Examples
>>> import smash
>>> from smash.factory import load_dataset
>>> setup, mesh = load_dataset("cance")
>>> model = smash.Model(setup, mesh)
Optimize the Model:
>>> model.bayesian_optimize()
</> Bayesian Optimize
    At iterate      0    nfg =     1    J =  7.70491e+01    ddx =  0.64
    At iterate      1    nfg =    68    J =  2.58460e+00    ddx =  0.64
    At iterate      2    nfg =   135    J =  2.32432e+00    ddx =  0.32
    At iterate      3    nfg =   202    J =  2.30413e+00    ddx =  0.08
    At iterate      4    nfg =   269    J =  2.26219e+00    ddx =  0.08
    At iterate      5    nfg =   343    J =  2.26025e+00    ddx =  0.01
    At iterate      6    nfg =   416    J =  2.25822e+00    ddx =  0.01
    CONVERGENCE: DDX < 0.01
Get the simulated discharges:
>>> model.response.q
array([[3.8725790e-04, 3.5435968e-04, 3.0995542e-04, ..., 1.9623449e+01,
        1.9391096e+01, 1.9163761e+01],
       [9.0669666e-05, 6.3609048e-05, 3.9684954e-05, ..., 4.7896299e+00,
        4.7395458e+00, 4.6904192e+00],
       [1.6136990e-05, 7.8192916e-06, 3.4578943e-06, ..., 1.2418084e+00,
        1.2288600e+00, 1.2161493e+00]], dtype=float32)
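Building on the same model, a hedged sketch (the prior values are illustrative, not recommendations) of an optimization constrained by Gaussian priors on two control parameters:
>>> model.bayesian_optimize(
...     cost_options={
...         "control_prior": {
...             "cp-0": ["Gaussian", [200, 100]],
...             "kexc-0": ["Gaussian", [0, 5]],
...         },
...     },
... )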