Calibration Experiment

Once your model structure is in place, you may wish to tune some of the model parameters so that the model's behavior under particular conditions matches a known (historical) pattern. If there are several parameters to tune, it makes sense to use the built-in optimizer to search for the best combination. The objective in this case is to minimize the difference between the observed simulation output and the historical data.

The calibration experiment uses the optimizer to find the model parameter values for which the simulation output best fits the given data. The data may be in either scalar or dataset form. When there are multiple criteria, coefficients may be used to combine and balance them. The calibration progress and the fitting of each criterion are displayed.

You can control the Calibration experiment with Java code. Refer to the Calibration experiment functions section for details.

Creating a calibration experiment

To create a calibration experiment
  1. In the Projects view, right-click (Mac OS: Ctrl+click) the model item and choose New > Experiment from the popup menu. The New Experiment dialog box is displayed.
  2. Choose the Calibration option in the Experiment Type list.
  3. Type the experiment name in the Name edit box.
  4. Choose the top-level agent of the experiment from the Top-level agent drop-down list.
  5. If you want to apply model time settings from another experiment, leave the Copy model time settings from check box selected and choose the experiment in the drop-down list to the right.
  6. Click Next to go to the Parameters and Criteria page of the wizard. Here you choose the parameters the optimizer will be allowed to vary and the calibration criteria.
  7. All parameters of the top-level agent are listed in the Parameters table. Go to the row of the Parameters table containing the parameter you want to make a varied parameter. Click the Type field and choose a type other than fixed. Depending on the type of the parameter, the list of possible values may vary: design, int, discrete for integer parameters; continuous and discrete for double, etc. Specify the range for the parameter: enter the parameter's lower bound in the Min field and its upper bound in the Max field. For discrete and design parameters, specify the parameter step in the Step field.
  8. In the Criteria table below, specify the calibration criteria. Each criterion is defined in an individual row of the table.
  9. Type the name of the criterion in the Title column.
  10. Choose the type of the criterion in the Type cell. Two types of criteria are available: scalar for fitting single values and dataset for fitting datasets.
  11. In the Expression field, specify the dataset or scalar value that will store the simulation output data. Use root here to refer to the top-level agent, e.g. root.myDataset.
  12. In the Observed cell, specify the name of the dataset, or an expression, that defines the data to be used as the given (historical) pattern. You will be able to define the observed datasets on the next page.
  13. In the Coefficient column, specify coefficients to combine and balance multiple criteria.
  14. Click Finish.
The calibration progress and fitting of each criterion are displayed in the default UI.
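
Conceptually, the optimizer minimizes a weighted sum of the differences between the simulation output and the observed data across all criteria. The following plain-Java sketch illustrates the idea for scalar criteria; it is not the code AnyLogic generates, and the names simulated, observed and coefficients are placeholders:

    // Combine several scalar calibration criteria into one objective value.
    // coefficients[] holds the weights from the Coefficient column.
    double calibrationObjective( double[] simulated, double[] observed, double[] coefficients ) {
        double total = 0;
        for( int i = 0; i < simulated.length; i++ ) {
            // each criterion contributes its weighted absolute difference
            total += coefficients[i] * Math.abs( simulated[i] - observed[i] );
        }
        return total; // the optimizer searches for parameter values minimizing this
    }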

Properties

General

Name – The name of the experiment.
 Since AnyLogic generates a Java class for each experiment, please follow Java naming guidelines and start the name with an uppercase letter.

Ignore – If selected, the experiment is excluded from the model.

Top-level agent – Using the drop-down list, choose the top-level agent type for the experiment. The agent of this type will play the role of the root of the hierarchical tree of agents in your model.

Objective – The objective function you want to minimize or maximize. The top-level agent is accessible here as root.
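
For example, an objective expression minimizing the deviation of a single scalar output from a known value could look like this (root.simulatedThroughput and the target 1250 are hypothetical placeholders):

    // Hypothetical objective: absolute deviation from the observed value
    Math.abs( root.simulatedThroughput - 1250 )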

Number of iterations – If selected, the calibration stops when the maximum number of simulations, specified in the field to the right, is exceeded.

Automatic stop – If selected, the calibration stops when the value of the objective function stops improving significantly (this option is called optimization autostop).

Maximum available memory – The maximum size of Java heap allocated for the model.

Create default UI – The button creates the default UI for the experiment. 
 Please do not press this button, since it will delete the experiment UI created by the wizard and create the default UI for an optimization experiment, which may not correspond to your task.

Parameters

Parameters – Here the user defines the set of optimization parameters (a.k.a. decision variables). The table lists all the parameters of the top-level agent. To make a parameter a decision variable, click in the Type field and choose a type other than fixed. Depending on the type of the parameter, the list of possible values may vary: design, int, discrete for integer parameters; continuous and discrete for double, etc. Then specify the range for the parameter: enter the parameter's lower bound in the Min field and its upper bound in the Max field. For discrete and design parameters, specify the increment value in the Step field.

Model time

Stop – Defines whether the model will Stop at specified time, Stop at specified date, or Never stop. In the first two cases, the stop time is specified using the Stop time/Stop date controls.

Start time – The initial time for the simulation time horizon.

Start date – The initial calendar date for the simulation time horizon.

Stop time – The final time for the simulation time horizon (the number of model time units the model runs before it is stopped).

Stop date – The final calendar date for the simulation time horizon.

Additional optimization stop conditions – Here you can define any number of additional optimization stop conditions. When any of these conditions becomes true, the optimization stops. A condition can include checks of dataset mean confidence, variable values, etc. The top-level agent of the experiment can be accessed here as root; so if, for example, you want to stop the optimization when the variable plainVar of the experiment's top-level agent steps over a threshold, type something like root.plainVar > 11. To make a condition active, select the checkbox in the corresponding row of the table.

Randomness

Random number generator – Here you specify whether you want to initialize the random number generator for this model randomly or with some fixed seed. This setting matters for stochastic models, which draw values from a pseudorandom number generator. With a random seed, model runs cannot be reproduced, since the generator is initialized with a different value for each run. With a fixed seed, the generator is initialized with the same value for each run, so the model runs are reproducible. Moreover, here you can substitute the AnyLogic default RNG with your own RNG.

Random seed (unique simulation runs) – If selected, the seed value of the random number generator is random. In this case, the random number generator is initialized with a different value for each model run, and the model runs are unique (non-reproducible).

Fixed seed (reproducible simulation runs) – If selected, the seed value of the random number generator is fixed (specify it in the Seed value field). In this case, the random number generator is initialized with the same value for each model run, and the model runs are reproducible.

Custom generator (subclass of Random) – If for any reason you are not satisfied with the quality of the default random number generator Random, you can substitute it with your own one. Just prepare your custom RNG (it should be a subclass of the Java class Random, e.g. MyRandom), choose this option, and type an expression returning an instance of your RNG in the field on the right, for example: new MyRandom() or new MyRandom( 1234 ). You can find more information here.
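
A minimal sketch of such a custom generator is shown below; MyRandom is the hypothetical class name used above, and the override simply delegates to the superclass, which is where a real implementation would plug in its own algorithm:

    import java.util.Random;

    // A custom RNG must be a subclass of java.util.Random.
    // Overriding next(int) is sufficient, since all other methods build on it.
    public class MyRandom extends Random {
        public MyRandom() { super(); }
        public MyRandom( long seed ) { super( seed ); }

        @Override
        protected int next( int bits ) {
            // A real implementation would use its own algorithm here;
            // this sketch simply delegates to the default generator.
            return super.next( bits );
        }
    }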

Selection mode for simultaneous events – Here you can choose the order of execution for simultaneous events (that occur at the same moment of model time). Choose from: FIFO (in the order of scheduling), LIFO (in the order reverse to scheduling) or Random.

Constraints

Here the user can define constraints - additional restrictions imposed on the optimization parameters.

Constraints on simulation parameters (are tested before a simulation run) – The table defining the optimization constraints. A constraint is a condition defined upon the optimization parameters: it defines a range for an optimization parameter. Each time the optimization engine generates a new set of values for the optimization parameters, it creates feasible solutions satisfying these constraints; thus the search space is reduced and the optimization is performed faster.

A constraint is a well-formed arithmetic expression describing a relationship between the optimization parameters. It always defines a limitation by specifying a lower or an upper bound, e.g. parameter1 >= 10.

Each constraint is defined in an individual row of the table and can be disabled by deselecting the corresponding checkbox in the first column.
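
Since a constraint describes a relationship between the optimization parameters, it can also bound their combination; for example (with hypothetical parameter names):

    capacityA + capacityB <= 100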

Requirements

Here the user can define requirements - additional restrictions imposed on the solutions found by the optimization engine.

Requirements (are tested after a simulation run to determine whether the solution is feasible) – The table defining the optimization requirements. A requirement is an additional restriction imposed on the solution found by the optimization engine. Requirements are checked at the end of each simulation, and if they are not met, the current parameter values are rejected.
A requirement can also be a restriction on a response that requires its value to fall within a specified range. It may contain any variables, parameters, functions, etc. of the experiment's top-level agent, accessible in the expression field as root.
Each requirement is defined in an individual row of the table and can be disabled by deselecting the corresponding checkbox in the first column.
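
For example, to reject solutions where the average waiting time exceeds a threshold, a requirement could read (root.meanWaitingTime is a hypothetical variable of the top-level agent):

    root.meanWaitingTime < 5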

Replications

Use replications – If selected, the OptQuest Engine will run several replications per simulation.

Fixed number of replications – If selected, a fixed number of replications will be run for each simulation.

Replications per iteration – [enabled if Fixed number of replications is set] The fixed number of replications to run for each simulation.

Varying number of replications (stop replications after minimum replications, when confidence level is reached) – If selected, a varying number of replications will be run for each simulation. When running a varying number of replications, you specify the minimum and maximum number of replications to be run. The OptQuest Engine always runs the minimum number of replications for a solution. OptQuest then determines whether more replications are needed, and stops evaluating a solution either when the confidence level is reached for the objective or when the maximum number of replications has been run.

Minimum replications – [enabled if Varying number of replications is set] The minimum number of replications the OptQuest Engine will always run for each simulation.

Maximum replications – [enabled if Varying number of replications is set] The maximum number of replications the OptQuest Engine can run for each simulation.

Confidence level – [enabled if Varying number of replications is set] The confidence level to be evaluated for the objective.

Error percent – [enabled if Varying number of replications is set] The percent of the objective value for which the confidence level is determined.

Window

Window properties define the appearance of the presentation window that is shown when the user starts the experiment. Note that the size of the experiment window is defined using the model frame and applies to all experiments and agent types of the model.

Title – The title of the presentation window.

Enable zoom and panning – If selected, the user will be allowed to pan and zoom the presentation window.

Maximized size – If selected, the presentation window will be maximized on model launch.

Close confirmation – If selected, a dialog box asking for confirmation is shown when the model window is closed. This prevents the user from closing the window by accidentally clicking the window's close button.

The Show Toolbar sections properties section defines which sections of the presentation window's toolbar are visible. To make a toolbar section visible, select the corresponding checkbox.

The Show Statusbar sections properties section defines which sections of the presentation window's status bar are visible. To make a status bar section visible, select the corresponding checkbox.

Java actions

Initial experiment setup – The code executed on experiment setup.

Before each experiment run – The code executed before each simulation run.

Before simulation run – The code executed before each simulation run. This code is run on setup of the model. At this moment the top-level agent of the model is already created, but the model is not started yet. You may perform here some actions with the elements of the top-level agent, e.g. assign actual parameter values here.
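
For instance, you could assign a parameter value of the top-level agent here (root.arrivalRate and the value are hypothetical):

    // Hypothetical example: set a top-level agent parameter before the run starts
    root.arrivalRate = 10.5;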

After simulation run – The code executed after each simulation run. This code is executed when the simulation engine finishes the model execution (i.e. when the Engine.finished() function is called). This code is not executed when you stop your model by clicking the Terminate execution button.

After iteration – The code executed after each iteration.

After experiment – The code executed after the experiment run.

Advanced Java

Imports section – import statements needed for correct compilation of the experiment class' code. When Java code is generated, these statements are inserted before the definition of the Java class.

Additional class code – Arbitrary member variables, nested classes, constants and methods are defined here. This code will be inserted into the experiment class definition. You can access these class data members anywhere within this experiment. 
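
For example, the following hypothetical members could be defined here and then used in the experiment's expressions and actions:

    // Hypothetical helper members inserted into the experiment class
    static final double TOLERANCE = 0.05;

    double relativeError( double simulated, double observed ) {
        return Math.abs( simulated - observed ) / observed;
    }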

Java machine arguments – Specify here the Java machine arguments you want to apply on launching your model. You can find detailed information on the possible arguments on the Sun Microsystems Java web site: http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/java.html

Command-line arguments – Here you can specify command-line arguments you want to pass to your model. You can get the values of the passed arguments in the experiment's Additional class code using the method String[] getCommandLineArguments().
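
For instance, a helper in the Additional class code section could read a numeric value from the first passed argument (the parsing logic and the default value are illustrative assumptions):

    // Read a double from the first command-line argument, if present
    double firstArgOrDefault( double defaultValue ) {
        String[] args = getCommandLineArguments();
        return args.length > 0 ? Double.parseDouble( args[0] ) : defaultValue;
    }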

Advanced

Allow parallel evaluations – If this option is selected and the processor has several cores, AnyLogic will run several experiment iterations in parallel on different processor cores. This multiplies performance, so the experiment completes significantly faster. The feature is made controllable because in some rare cases parallel evaluations may affect the optimizer strategy, so that more iterations are required to find the optimal solution.
Do not use static variables, collections, table functions, or custom distributions (check that their advanced option Static is deselected) if you turn on parallel evaluations here.

Load top-level agent from snapshot – If selected, the experiment will load the model state from the snapshot file specified in the control to the right. The experiment will be started from the time when the model state was saved.