smash.Model.optimize

Model.optimize(mapping='uniform', optimizer=None, optimize_options=None, cost_options=None, common_options=None, return_options=None)

Model assimilation using numerical optimization algorithms.

Parameters:
mapping : str, default 'uniform'

Type of mapping. Should be one of

  • 'uniform'

  • 'distributed'

  • 'multi-linear'

  • 'multi-polynomial'

  • 'ann'

Hint

See the Mapping section

optimizer : str or None, default None

Name of optimizer. Should be one of

  • 'sbs' ('uniform' mapping only)

  • 'lbfgsb' ('uniform', 'distributed', 'multi-linear' or 'multi-polynomial' mapping only)

  • 'sgd' ('ann' mapping only)

  • 'adam' ('ann' mapping only)

  • 'adagrad' ('ann' mapping only)

  • 'rmsprop' ('ann' mapping only)

Note

If not given, a default optimizer will be set depending on the mapping:

  • mapping = 'uniform'; optimizer = 'sbs'

  • mapping = 'distributed', 'multi-linear', or 'multi-polynomial'; optimizer = 'lbfgsb'

  • mapping = 'ann'; optimizer = 'adam'

Hint

See the Optimization Algorithm section
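
For instance, a compatible mapping and optimizer pair can be passed directly to the method (a minimal sketch using the signature above; the choice of values is purely illustrative):

>>> model.optimize(mapping="multi-linear", optimizer="lbfgsb")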

optimize_options : dict[str, Any] or None, default None

Dictionary containing optimization options for fine-tuning the optimization process. See default_optimize_options to retrieve the default optimize options based on the mapping and optimizer.

parameters : str, list[str, …] or None, default None

Name of parameters to optimize. Should be one or a sequence of any key of Model.rr_parameters or Model.rr_initial_states:

>>> optimize_options = {
    "parameters": "cp",
}
>>> optimize_options = {
    "parameters": ["cp", "ct", "kexc", "llr"],
}

Note

If not given, all parameters in Model.rr_parameters will be optimized.

bounds : dict[str, tuple[float, float]] or None, default None

Bounds on optimized parameters. A dictionary where the keys represent parameter names, and the values are pairs of (min, max) values (i.e., a list or tuple) with min lower than max. The keys must be included in parameters.

>>> optimize_options = {
    "bounds": {
        "cp": (1, 2000),
        "ct": (1, 1000),
        "kexc": (-10, 5)
        "llr": (1, 1000)
    },
}

Note

If not given, default bounds will be applied to each parameter. See Model.get_rr_parameters_bounds and Model.get_rr_initial_states_bounds.

control_tfm : str or None, default None

Transformation method applied to the control vector. Only used with 'sbs' or 'lbfgsb' optimizer. Should be one of:

  • 'keep'

  • 'normalize'

  • 'sbs' ('sbs' optimizer only)

Note

If not given, a default control vector transformation will be set depending on the optimizer:

  • optimizer = 'sbs'; control_tfm = 'sbs'

  • optimizer = 'lbfgsb'; control_tfm = 'normalize'
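
As an illustration, the transformation can also be set explicitly through optimize_options (a minimal sketch using one of the accepted values listed above):

>>> optimize_options = {
    "control_tfm": "normalize",
}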

descriptor : dict[str, list[str, …]] or None, default None

Descriptors linked to optimized parameters. A dictionary where the keys represent parameter names, and the values are lists of descriptor names. The keys must be included in parameters.

>>> optimize_options = {
    "descriptor": {
        "cp": ["slope", "dd"],
        "ct": ["slope"],
        "kexc": ["slope", "dd"],
        "llr": ["dd"],
    },
}

Note

If not given, all descriptors will be used for each parameter. This option is only used when mapping is 'multi-linear' or 'multi-polynomial'. With 'ann' mapping, all descriptors are used.

net : Net or None, default None

The neural network used to learn the descriptors-to-parameters mapping.

Note

If not given, a default neural network will be used. This option is only used when mapping is 'ann'. See Net to learn how to create a customized neural network for training.

learning_rate : float or None, default None

The learning rate used for weight updates during training.

Note

If not given, a default learning rate will be used. This option is only used when mapping is 'ann'.

random_state : int or None, default None

A random seed used to initialize neural network weights.

Note

If not given, the weights will be initialized without a fixed seed (non-reproducible). This option is only used when mapping is 'ann'.
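
A minimal sketch combining the two ANN-related elements above to make a training run reproducible (the values are illustrative):

>>> optimize_options = {
    "learning_rate": 0.005,
    "random_state": 23,
}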

termination_crit : dict[str, Any] or None, default None

Termination criteria. The elements are:

  • 'maxiter': The maximum number of iterations. Only used when optimizer is 'sbs' or 'lbfgsb'.

  • 'factr': An additional termination criterion based on cost values. Only used when optimizer is 'lbfgsb'.

  • 'pgtol': An additional termination criterion based on the projected gradient of the cost function. Only used when optimizer is 'lbfgsb'.

  • 'epochs': The number of training epochs for the neural network. Only used when mapping is 'ann'.

  • 'early_stopping': A positive number to stop training when the loss function does not decrease below the current optimal value for early_stopping consecutive epochs. When set to zero, early stopping is disabled, and the training continues for the full number of epochs. Only used when mapping is 'ann'.

>>> optimize_options = {
    "termination_crit": {
        "maxiter": 10,
        "factr": 1e6,
    },
}
>>> optimize_options = {
    "termination_crit": {
        "epochs": 200,
    },
}

Note

If not given, default values are set for each element.
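
As a sketch, several of the elements above can be combined and passed to the method in a single call (the parameter names and values are illustrative):

>>> optimize_options = {
    "parameters": ["cp", "ct"],
    "bounds": {"cp": (1, 2000), "ct": (1, 1000)},
    "termination_crit": {"maxiter": 5},
}
>>> model.optimize(optimize_options=optimize_options)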

cost_options : dict[str, Any] or None, default None

Dictionary containing computation cost options for simulated and observed responses. The elements are:

jobs_cmpt : str or list[str, …], default 'nse'

Type of observation objective function(s) to be computed. Should be one or a sequence of any of

  • 'nse', 'nnse', 'kge', 'mae', 'mape', 'mse', 'rmse', 'lgrm' (classical evaluation metrics)

  • 'Crc', 'Crchf', 'Crclf', 'Crch2r', 'Cfp2', 'Cfp10', 'Cfp50', 'Cfp90' (continuous signatures-based error metrics)

  • 'Eff', 'Ebf', 'Erc', 'Erchf', 'Erclf', 'Erch2r', 'Elt', 'Epf' (flood event signatures-based error metrics)

>>> cost_options = {
    "jobs_cmpt": "nse",
}
>>> cost_options = {
    "jobs_cmpt": ["nse", "Epf"],
}
wjobs_cmpt : str or list[float, …], default 'mean'

The corresponding weighting of observation objective functions in case of multi-criteria (i.e., a sequence of objective functions to compute). There are two ways to specify it:

  • An alias among 'mean'

  • A sequence of values whose size must be equal to the number of observation objective function(s) in jobs_cmpt

>>> cost_options = {
    "wjobs_cmpt": "mean",
}
>>> cost_options = {
    "wjobs_cmpt": [0.7, 0.3],
}
jobs_cmpt_tfm : str or list[str, …], default 'keep'

Type of transformation applied to discharge in observation objective function(s). Should be one or a sequence of any of

  • 'keep' : No transformation \(f:x \rightarrow x\)

  • 'sqrt' : Square root transformation \(f:x \rightarrow \sqrt{x}\)

  • 'inv' : Multiplicative inverse transformation \(f:x \rightarrow \frac{1}{x}\)

>>> cost_options = {
    "jobs_cmpt_tfm": "inv",
}
>>> cost_options = {
    "jobs_cmpt_tfm": ["keep", "inv"],
}

Note

If jobs_cmpt is multi-criteria and only one transformation is chosen in jobs_cmpt_tfm, the transformation will be applied to each observation objective function.

wjreg : float or str, default 0

The weighting of regularization term. There are two ways to specify it:

  • A value greater than or equal to 0

  • An alias among 'fast' or 'lcurve'. wjreg will be auto-computed by one of these methods.

>>> cost_options = {
    "wjreg": 1e-4,
}
>>> cost_options = {
    "wjreg": "lcurve",
}
jreg_cmpt : str or list[str, …], default 'prior'

Type(s) of regularization function(s) to be minimized when the regularization term is set (i.e., wjreg > 0). Should be one or a sequence of any of

  • 'prior'

  • 'smoothing'

  • 'hard-smoothing'

>>> cost_options = {
    "jreg_cmpt": "prior",
}
>>> cost_options = {
    "jreg_cmpt": ["prior", "smoothing"],
}

Hint

See the Regularization Function section

wjreg_cmpt : str or list[float, …], default 'mean'

The corresponding weighting of regularization functions in case of multi-regularization (i.e., a sequence of regularization functions to compute). There are two ways to specify it:

  • An alias among 'mean'

  • A sequence of values whose size must be equal to the number of regularization function(s) in jreg_cmpt

>>> cost_options = {
    "wjreg_cmpt": "mean",
}
>>> cost_options = {
    "wjreg_cmpt": [1., 2.],
}
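
As a sketch, the regularization-related elements above can be combined in a single dictionary; the 'distributed' mapping here is only an illustrative choice of a spatially distributed control:

>>> cost_options = {
    "wjreg": "lcurve",
    "jreg_cmpt": ["prior", "smoothing"],
    "wjreg_cmpt": [1., 2.],
}
>>> model.optimize(mapping="distributed", cost_options=cost_options)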
end_warmup : str, pandas.Timestamp or None, default None

The end of the warm-up period, which must be between the start time and the end time defined in Model.setup.

>>> cost_options = {
    "end_warmup": "1997-12-21",
}
>>> cost_options = {
    "end_warmup": pd.Timestamp("19971221"),
}

Note

If not given, it is set to be equal to the Model.setup start time.

gauge : str or list[str, …], default 'dws'

Type of gauge to be computed. There are two ways to specify it:

  • An alias among 'all' (all gauge codes) or 'dws' (most downstream gauge code(s))

  • A gauge code or any sequence of gauge codes. The gauge code(s) given must belong to the gauge codes defined in the Model.mesh

>>> cost_options = {
    "gauge": "dws",
}
>>> cost_options = {
    "gauge": "V3524010",
}
>>> cost_options = {
    "gauge": ["V3524010", "V3515010"],
}
wgauge : str or list[float, …], default 'mean'

Type of gauge weights. There are two ways to specify it:

  • An alias among 'mean', 'lquartile' (1st quartile or lower quartile), 'median', or 'uquartile' (3rd quartile or upper quartile)

  • A sequence of values whose size must be equal to the number of gauges optimized in gauge

>>> cost_options = {
    "wgauge": "mean",
}
>>> cost_options = {
    "wgauge": [0.6, 0.4]",
}
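
For instance, a multi-gauge setting where the number of weights matches the number of selected gauges (a sketch reusing the gauge codes shown above):

>>> cost_options = {
    "gauge": ["V3524010", "V3515010"],
    "wgauge": [0.6, 0.4],
}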
event_seg : dict[str, float], default {'peak_quant': 0.995, 'max_duration': 240}

A dictionary of event segmentation options when calculating flood event signatures for cost computation (i.e., jobs_cmpt includes flood event signatures).

>>> cost_options = {
    "event_seg": {
        "peak_quant": 0.998,
        "max_duration": 120,
    },
}

Hint

See the hydrograph_segmentation function and Hydrograph Segmentation section.
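
A sketch assembling several of the cost options above into one call (the values are illustrative):

>>> cost_options = {
    "jobs_cmpt": ["nse", "Epf"],
    "gauge": "all",
    "end_warmup": "1997-12-21",
}
>>> model.optimize(cost_options=cost_options)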

common_options : dict[str, Any] or None, default None

Dictionary containing common options with two elements:

ncpu : int, default 1

Number of CPU(s) to perform a parallel computation.

Warning

Parallel computation is not supported on Windows.

verbose : bool, default False

Whether to display information about the running method.
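
For example, running the optimization on several CPUs with progress information (a minimal sketch; as noted above, parallel computation is not supported on Windows):

>>> common_options = {
    "ncpu": 4,
    "verbose": True,
}
>>> model.optimize(common_options=common_options)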

return_options : dict[str, Any] or None, default None

Dictionary containing return options to save intermediate variables. The elements are:

time_step : str, pandas.Timestamp, pandas.DatetimeIndex or list[str, …], default 'all'

Returned time steps. There are five ways to specify it:

>>> return_options = {
    "time_step": "all",
}
>>> return_options = {
    "time_step": "1997-12-21",
}
>>> return_options = {
    "time_step": pd.Timestamp("19971221"),
}
>>> return_options = {
    "time_step": pd.date_range(
        start="1997-12-21",
        end="1998-12-21",
        freq="1D"
    ),
}
>>> return_options = {
    "time_step": ["1998-05-23", "1998-05-24", "1998-05-25"],
}

Note

It only applies to the following variables: 'rr_states' and 'q_domain'

rr_states : bool, default False

Whether to return rainfall-runoff states for specific time steps.

q_domain : bool, default False

Whether to return simulated discharge on the whole domain for specific time steps.

iter_cost : bool, default False

Whether to return cost iteration values.

iter_projg : bool, default False

Whether to return the infinity norm of the projected gradient iteration values.

control_vector : bool, default False

Whether to return the control vector at the end of the optimization. In case of optimization with 'ann' mapping, the control vector is represented in Net.layers instead.

net : bool, default False

Whether to return the trained neural network Net. Only used with 'ann' mapping.

cost : bool, default False

Whether to return cost value.

jobs : bool, default False

Whether to return jobs (observation component of cost) value.

jreg : bool, default False

Whether to return jreg (regularization component of cost) value.

lcurve_wjreg : bool, default False

Whether to return the wjreg lcurve. Only used if wjreg in cost_options is equal to 'lcurve'.

Returns:
optimize : Optimize or None, default None

It returns an object containing the intermediate variables defined in return_options. If no intermediate variables are defined, it returns None.
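
As a sketch, intermediate variables can be requested and then read back from the returned object; attribute access by option name is assumed here:

>>> ret = model.optimize(return_options={"cost": True, "iter_cost": True})
>>> ret.cost
>>> ret.iter_cost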

See also

Optimize

Represents optimize optional results.

Examples

>>> import smash
>>> from smash.factory import load_dataset
>>> setup, mesh = load_dataset("cance")
>>> model = smash.Model(setup, mesh)

Optimize the Model

>>> model.optimize()
</> Optimize
    At iterate      0    nfg =     1    J =      0.695010    ddx = 0.64
    At iterate      1    nfg =    30    J =      0.098411    ddx = 0.64
    At iterate      2    nfg =    59    J =      0.045409    ddx = 0.32
    At iterate      3    nfg =    88    J =      0.038182    ddx = 0.16
    At iterate      4    nfg =   117    J =      0.037362    ddx = 0.08
    At iterate      5    nfg =   150    J =      0.037087    ddx = 0.02
    At iterate      6    nfg =   183    J =      0.036800    ddx = 0.02
    At iterate      7    nfg =   216    J =      0.036763    ddx = 0.01
    CONVERGENCE: DDX < 0.01

Get the simulated discharges

>>> model.response.q
array([[5.8217382e-04, 4.7552516e-04, 3.5390016e-04, ..., 1.9439360e+01,
        1.9214035e+01, 1.8993553e+01],
       [1.2144950e-04, 6.6219603e-05, 3.0706105e-05, ..., 4.8059664e+00,
        4.7563825e+00, 4.7077618e+00],
       [1.9631827e-05, 6.9778653e-06, 2.2202073e-06, ..., 1.2523955e+00,
        1.2394531e+00, 1.2267693e+00]], dtype=float32)