smash.Model.optimize
- Model.optimize(mapping='uniform', optimizer=None, optimize_options=None, cost_options=None, common_options=None, return_options=None, callback=None)
Model assimilation using numerical optimization algorithms.
- Parameters:
- mapping : str, default 'uniform'
Type of mapping. Should be one of:
- 'uniform'
- 'distributed'
- 'multi-linear'
- 'multi-power'
- 'ann'
Hint
See the Mapping section.
- optimizer : str or None, default None
Name of optimizer. Should be one of:
- 'sbs' (only for 'uniform' mapping)
- 'nelder-mead' (only for 'uniform' mapping)
- 'powell' (only for 'uniform' mapping)
- 'lbfgsb' (for all mappings except 'ann')
- 'adam' (for all mappings)
- 'adagrad' (for all mappings)
- 'rmsprop' (for all mappings)
- 'sgd' (for all mappings)
Note
If not given, a default optimizer will be set as follows:
- 'sbs' for mapping = 'uniform'
- 'lbfgsb' for mapping = 'distributed', 'multi-linear' or 'multi-power'
- 'adam' for mapping = 'ann'
Hint
See the Optimization Algorithms section.
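The mapping-to-default-optimizer rule above can be pictured as a plain lookup. This is an illustrative sketch only; `DEFAULT_OPTIMIZER` and `default_optimizer` are hypothetical names, not part of the smash API:

```python
# Illustrative sketch of the default-optimizer rule described above;
# neither name below exists in smash itself.
DEFAULT_OPTIMIZER = {
    "uniform": "sbs",
    "distributed": "lbfgsb",
    "multi-linear": "lbfgsb",
    "multi-power": "lbfgsb",
    "ann": "adam",
}

def default_optimizer(mapping: str) -> str:
    # Mirrors the behaviour of optimizer=None for a given mapping.
    return DEFAULT_OPTIMIZER[mapping]
```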
- optimize_options : dict[str, Any] or None, default None
Dictionary containing optimization options for fine-tuning the optimization process. See default_optimize_options to retrieve the default optimize options based on the mapping and optimizer.
- parameters : str, list[str, …] or None, default None
Name of parameters to optimize. Should be one or a sequence of any key of:
- Model.rr_parameters
- Model.rr_initial_states
- Model.nn_parameters, if using a hybrid model structure (depending on hydrological_module)
>>> optimize_options = {"parameters": "cp"}
>>> optimize_options = {"parameters": ["cp", "ct", "kexc", "llr"]}
Note
If not given, all parameters in Model.rr_parameters and Model.nn_parameters (if used) will be optimized.
- bounds : dict[str, tuple[float, float]] or None, default None
Bounds on optimized parameters. A dictionary where the keys represent parameter names, and the values are pairs of (min, max) values (i.e., a list or tuple) with min lower than max. The keys must be included in parameters.
>>> optimize_options = {"bounds": {"cp": (1, 2000), "ct": (1, 1000), "kexc": (-10, 5), "llr": (1, 1000)}}
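A minimal sanity check for such a bounds dictionary, assuming only the (min, max) convention stated above (`check_bounds` is a hypothetical helper, not a smash function):

```python
def check_bounds(bounds):
    # Each value must be a (min, max) pair with min strictly lower than max.
    for name, pair in bounds.items():
        lo, hi = pair
        if not lo < hi:
            raise ValueError(f"invalid bounds for '{name}': ({lo}, {hi})")
    return True
```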
Note
If not given, default bounds will be applied to each parameter. See Model.get_rr_parameters_bounds and Model.get_rr_initial_states_bounds.
- control_tfm : str or None, default None
Transformation method applied to bounded parameters of the control vector. Should be one of:
- 'keep'
- 'normalize'
- 'sbs' ('sbs' optimizer only)
Note
If not given, the default control vector transformation is control_tfm = 'normalize', except for the 'sbs' optimizer, where control_tfm = 'sbs'. This option is not used when mapping is 'ann'.
- descriptor : dict[str, list[str, …]] or None, default None
Descriptors linked to optimized parameters. A dictionary where the keys represent parameter names, and the values are list of descriptor names. The keys must be included in parameters.
>>> optimize_options = {"descriptor": {"cp": ["slope", "dd"], "ct": ["slope"], "kexc": ["slope", "dd"], "llr": ["dd"]}}
Note
If not given, all descriptors will be used for each parameter. This option is only used when mapping is 'multi-linear' or 'multi-power'. In the case of 'ann', all descriptors are always used.
- net : Net or None, default None
The regionalization neural network used to learn the descriptors-to-parameters mapping.
Note
If not given, a default neural network will be used. This option is only used when mapping is 'ann'. See Net to learn how to create a customized neural network for training.
- learning_rate : float or None, default None
The learning rate used for updating trainable parameters when using adaptive optimizers (i.e., 'adam', 'adagrad', 'rmsprop', 'sgd').
Note
If not given, a default learning rate for each optimizer will be used.
- random_state : int or None, default None
A random seed used to initialize neural network parameters.
Note
If not given, the neural network parameters will be initialized with a random seed. This option is only used when mapping is 'ann', and the weights and biases of net are not yet initialized.
- termination_crit : dict[str, Any] or None, default None
Termination criteria. The elements are:
- 'maxiter': The maximum number of iterations.
- 'xatol': Absolute error in solution parameters between iterations that is acceptable for convergence. Only used when optimizer is 'nelder-mead'.
- 'fatol': Absolute error in cost function value between iterations that is acceptable for convergence. Only used when optimizer is 'nelder-mead'.
- 'factr': An additional termination criterion based on cost values. Only used when optimizer is 'lbfgsb'.
- 'pgtol': An additional termination criterion based on the projected gradient of the cost function. Only used when optimizer is 'lbfgsb'.
- 'early_stopping': A positive number to stop training when the cost function does not decrease below the current optimal value for early_stopping consecutive iterations. When set to zero, early stopping is disabled and training continues for the full number of iterations. Only used for adaptive optimizers (i.e., 'adam', 'adagrad', 'rmsprop', 'sgd').
>>> optimize_options = {"termination_crit": {"maxiter": 10, "factr": 1e6}}
>>> optimize_options = {"termination_crit": {"maxiter": 200, "early_stopping": 20}}
Note
If not given, default values are set for each element.
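Putting the pieces together, an optimize_options dictionary combining the elements described above might look like the following (the parameter names and values are purely illustrative):

```python
# Illustrative composition of optimize_options from the elements above.
optimize_options = {
    "parameters": ["cp", "ct", "kexc", "llr"],     # which parameters to optimize
    "bounds": {"cp": (1, 2000), "ct": (1, 1000)},  # (min, max) per parameter
    "termination_crit": {"maxiter": 100},          # stop after 100 iterations
}
```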
- cost_options : dict[str, Any] or None, default None
Dictionary containing computation cost options for simulated and observed responses. The elements are:
- jobs_cmpt : str or list[str, …], default 'nse'
Type of observation objective function(s) to be computed. Should be one or a sequence of any of:
- 'nse', 'nnse', 'kge', 'mae', 'mape', 'mse', 'rmse', 'lgrm' (classical evaluation metrics)
- 'Crc', 'Crchf', 'Crclf', 'Crch2r', 'Cfp2', 'Cfp10', 'Cfp50', 'Cfp90' (continuous signatures-based error metrics)
- 'Eff', 'Ebf', 'Erc', 'Erchf', 'Erclf', 'Erch2r', 'Elt', 'Epf' (flood event signatures-based error metrics)
>>> cost_options = {"jobs_cmpt": "nse"}
>>> cost_options = {"jobs_cmpt": ["nse", "Epf"]}
Hint
See the Efficiency & Error Metric and Hydrological Signature sections
- wjobs_cmpt : str or list[float, …], default 'mean'
The corresponding weighting of observation objective functions in case of multi-criteria (i.e., a sequence of objective functions to compute). There are two ways to specify it:
- An alias among 'mean'
- A sequence of values whose size must be equal to the number of observation objective function(s) in jobs_cmpt
>>> cost_options = {"wjobs_cmpt": "mean"}
>>> cost_options = {"wjobs_cmpt": [0.7, 0.3]}
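As a sketch of what the 'mean' alias implies, each observation objective function receives the same weight. This is only the equal-weights reading of the alias, not smash internals:

```python
jobs_cmpt = ["nse", "Epf"]  # two observation objective functions
# 'mean' alias: every objective function gets the same weight,
# so the weights sum to one.
wjobs_cmpt = [1.0 / len(jobs_cmpt)] * len(jobs_cmpt)
```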
- jobs_cmpt_tfm : str or list[str, …], default 'keep'
Type of transformation applied to discharge in observation objective function(s). Should be one or a sequence of any of:
- 'keep': No transformation \(f:x \rightarrow x\)
- 'sqrt': Square root transformation \(f:x \rightarrow \sqrt{x}\)
- 'inv': Multiplicative inverse transformation \(f:x \rightarrow \frac{1}{x}\)
>>> cost_options = {"jobs_cmpt_tfm": "inv"}
>>> cost_options = {"jobs_cmpt_tfm": ["keep", "inv"]}
Note
If jobs_cmpt is a list of multiple objective functions and only one transformation is given in jobs_cmpt_tfm, that transformation is applied to each observation objective function.
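The broadcasting behaviour described in this note can be pictured as follows (a sketch of the rule, not smash internals):

```python
jobs_cmpt = ["nse", "kge", "rmse"]
jobs_cmpt_tfm = "inv"  # a single transformation for several objectives
# A single transformation applies to each objective function,
# equivalent to repeating it once per entry of jobs_cmpt:
tfm_per_job = [jobs_cmpt_tfm] * len(jobs_cmpt)
```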
- wjreg : float or str, default 0
The weighting of the regularization term. There are two ways to specify it:
- A value greater than or equal to 0
- An alias among 'fast' or 'lcurve'; wjreg will be auto-computed by one of these methods
>>> cost_options = {"wjreg": 1e-4}
>>> cost_options = {"wjreg": "lcurve"}
Hint
See the Regularization weighting coefficient section
- jreg_cmpt : str or list[str, …], default 'prior'
Type(s) of regularization function(s) to be minimized when the regularization term is set (i.e., wjreg > 0). Should be one or a sequence of any of:
- 'prior'
- 'smoothing'
- 'hard-smoothing'
>>> cost_options = {"jreg_cmpt": "prior"}
>>> cost_options = {"jreg_cmpt": ["prior", "smoothing"]}
Hint
See the Regularization Function section
- wjreg_cmpt : str or list[float, …], default 'mean'
The corresponding weighting of regularization functions in case of multi-regularization (i.e., a sequence of regularization functions to compute). There are two ways to specify it:
- An alias among 'mean'
- A sequence of values whose size must be equal to the number of regularization function(s) in jreg_cmpt
>>> cost_options = {"wjreg_cmpt": "mean"}
>>> cost_options = {"wjreg_cmpt": [1.0, 2.0]}
- end_warmup : str, pandas.Timestamp or None, default None
The end of the warm-up period, which must be between the start time and the end time defined in Model.setup.
>>> cost_options = {"end_warmup": "1997-12-21"}
>>> cost_options = {"end_warmup": pd.Timestamp("19971221")}
Note
If not given, it is set equal to the Model.setup start time.
- gauge : str or list[str, …], default 'dws'
Type of gauge to be computed. There are two ways to specify it:
- An alias among 'all' (all gauge codes) or 'dws' (most downstream gauge code(s))
- A gauge code or any sequence of gauge codes. The gauge code(s) given must belong to the gauge codes defined in Model.mesh
>>> cost_options = {"gauge": "dws"}
>>> cost_options = {"gauge": "V3524010"}
>>> cost_options = {"gauge": ["V3524010", "V3515010"]}
- wgauge : str or list[float, …], default 'mean'
Type of gauge weights. There are two ways to specify it:
- An alias among 'mean', 'lquartile' (1st quantile or lower quantile), 'median', or 'uquartile' (3rd quantile or upper quantile)
- A sequence of values whose size must be equal to the number of gauges optimized in gauge
>>> cost_options = {"wgauge": "mean"}
>>> cost_options = {"wgauge": [0.6, 0.4]}
- event_seg : dict[str, float], default {'peak_quant': 0.995, 'max_duration': 240}
A dictionary of event segmentation options when calculating flood event signatures for cost computation (i.e., jobs_cmpt includes flood events signatures).
>>> cost_options = {"event_seg": {"peak_quant": 0.998, "max_duration": 120}}
Hint
See the hydrograph_segmentation function and the Hydrograph Segmentation section.
- common_options : dict[str, Any] or None, default None
Dictionary containing common options with two elements:
- ncpu : int, default 1
Number of CPU(s) to perform a parallel computation.
Warning
Parallel computation is not supported on Windows.
- verbose : bool, default False
Whether to display information about the running method.
- return_options : dict[str, Any] or None, default None
Dictionary containing return options to save additional simulation results. The elements are:
- time_step : str, pandas.Timestamp, pandas.DatetimeIndex or list[str, …], default 'all'
Returned time steps. There are five ways to specify it:
- An alias among 'all' (return all time steps)
- A date as a string which respects the pandas.Timestamp format
- A pandas.Timestamp
- A pandas.DatetimeIndex
- A sequence of dates as strings
>>> return_options = {"time_step": "all"}
>>> return_options = {"time_step": "1997-12-21"}
>>> return_options = {"time_step": pd.Timestamp("19971221")}
>>> return_options = {"time_step": pd.date_range(start="1997-12-21", end="1998-12-21", freq="1D")}
>>> return_options = {"time_step": ["1998-05-23", "1998-05-24", "1998-05-25"]}
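A daily sequence of date strings in a pandas.Timestamp-compatible format can also be generated with the standard library alone; this is just a convenience sketch, and any of the five forms above works equally well:

```python
from datetime import date, timedelta

# Build three consecutive daily time steps as ISO date strings,
# suitable for the list-of-strings form of time_step.
start = date(1998, 5, 23)
time_step = [(start + timedelta(days=i)).isoformat() for i in range(3)]
```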
Note
It only applies to the following variables: 'rr_states' and 'q_domain'.
- rr_states : bool, default False
Whether to return rainfall-runoff states for specific time steps.
- q_domain : bool, default False
Whether to return simulated discharge on the whole domain for specific time steps.
- internal_fluxes : bool, default False
Whether to return internal fluxes depending on the model structure on the whole domain for specific time steps.
- control_vector : bool, default False
Whether to return the control vector solution of the optimization (it can be transformed).
- net : bool, default False
Whether to return the trained neural network Net. Only used with 'ann' mapping.
- cost : bool, default False
Whether to return cost value.
- n_iter : bool, default False
Whether to return the number of iterations performed.
- projg : bool, default False
Whether to return the projected gradient value (infinity norm of the Jacobian matrix).
- jobs : bool, default False
Whether to return jobs (observation component of cost) value.
- jreg : bool, default False
Whether to return jreg (regularization component of cost) value.
- lcurve_wjreg : bool, default False
Whether to return the wjreg L-curve. Only used if wjreg in cost_options is equal to 'lcurve'.
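A return_options dictionary requesting several of the flags above at once might look like the following (which flags to enable is, of course, use-case dependent):

```python
# Illustrative composition of return_options from the flags above.
return_options = {
    "time_step": "all",   # return every time step for the variables below
    "rr_states": True,    # rainfall-runoff states
    "q_domain": True,     # simulated discharge on the whole domain
    "cost": True,         # final cost value
    "n_iter": True,       # number of iterations performed
}
```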
- callback : callable or None, default None
A callable called after each iteration with the signature callback(iopt: Optimize), where iopt is a keyword argument representing an instance of the Optimize class that contains intermediate optimization results with attributes:
- 'control_vector': The current control vector.
- 'cost': The current cost value.
- 'n_iter': The current number of iterations performed by the optimizer.
- 'projg': The current projected gradient, available if using gradient-based optimizers.
- 'net': The regionalization neural network state, available if using 'ann' mapping.
>>> import numpy as np
>>> iter_cost = []  # to get the cost values through iterations
>>> def callback_func(iopt, icost=iter_cost):
...     icost.append(iopt.cost)
...     # save the current control vector value to a text file
...     np.savetxt(f"control_iter_{len(icost)}.txt", iopt.control_vector)
>>> callback = callback_func
Note
The name of the argument must be iopt for the callback to be passed an Optimize object.
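To see the callback mechanics without running an actual optimization, a types.SimpleNamespace can stand in for the Optimize instance. This is purely illustrative; smash itself constructs and passes the real Optimize object at each iteration:

```python
from types import SimpleNamespace

iter_cost = []  # collects the cost value at each iteration

def callback_func(iopt, icost=iter_cost):
    icost.append(iopt.cost)

# smash would call the callback once per iteration; a SimpleNamespace
# with the same attributes stands in for the Optimize instance here.
for k, cost in enumerate([0.9, 0.5, 0.2]):
    callback_func(iopt=SimpleNamespace(cost=cost, n_iter=k))
```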
- Returns:
- optimize : Optimize or None, default None
An object containing additional simulation results with the keys defined in return_options. If no keys are defined, None is returned.
See also
Optimize : Represents optimize optional results.
Examples
>>> import smash
>>> from smash.factory import load_dataset
>>> setup, mesh = load_dataset("cance")
>>> model = smash.Model(setup, mesh)
Optimize the Model
>>> model.optimize()
</> Optimize
    At iterate     0    nfg =    1    J = 6.95010e-01    ddx = 0.64
    At iterate     1    nfg =   30    J = 9.84107e-02    ddx = 0.64
    At iterate     2    nfg =   59    J = 4.54087e-02    ddx = 0.32
    At iterate     3    nfg =   88    J = 3.81818e-02    ddx = 0.16
    At iterate     4    nfg =  117    J = 3.73617e-02    ddx = 0.08
    At iterate     5    nfg =  150    J = 3.70873e-02    ddx = 0.02
    At iterate     6    nfg =  183    J = 3.68004e-02    ddx = 0.02
    At iterate     7    nfg =  216    J = 3.67635e-02    ddx = 0.01
    At iterate     8    nfg =  240    J = 3.67277e-02    ddx = 0.01
    CONVERGENCE: DDX < 0.01
Get the simulated discharges
>>> model.response.q
array([[5.8217300e-04, 4.7552472e-04, 3.5390016e-04, ..., 1.9405001e+01,
        1.9179874e+01, 1.8959581e+01],
       [1.2144940e-04, 6.6219603e-05, 3.0706153e-05, ..., 4.7972722e+00,
        4.7477250e+00, 4.6991367e+00],
       [1.9631812e-05, 6.9778694e-06, 2.2202112e-06, ..., 1.2500964e+00,
        1.2371680e+00, 1.2244837e+00]], dtype=float32)