Compare Runs Experiment

How-To video: Compare Runs experiment

This is an interactive experiment that allows you to input the model parameters, run the simulation, and add the simulation output to charts where it can be compared with the results of other runs.

The default UI for this experiment includes the input fields and the output charts. You can select a particular output result by clicking it in a chart, and the corresponding parameter values will be displayed.

You can control the Compare runs experiment with Java code. Refer to the Compare runs experiment functions section for details.

Demo model: Compare Runs Experiment

Creating a compare runs experiment

To create a compare runs experiment
  1. In the Projects view, right-click (Mac OS: Ctrl+click) the model item and choose New > Experiment from the popup menu. The New Experiment dialog box is displayed.
  2. Choose the Compare Runs option in the Experiment Type list.
  3. Type the experiment name in the Name edit box.
  4. Choose the top-level agent of the experiment from the Top-level agent drop-down list.
  5. If you want to apply model time settings from another experiment, leave the Copy model time settings from check box selected and choose that experiment in the drop-down list to the right.
  6. Click Next when finished. The Parameters page of the wizard opens. Here you specify the parameters of the top-level agent whose values you will be able to change between experiment runs.
  7. Add the parameters you need from the Available list to the Selection list. To add a parameter, select it in the Available list and click the add button. To add all available parameters, click the add all button. If needed, you can remove selected parameters at any time: to remove a parameter, select it in the Selection list and click the remove button; to remove all selected parameters, click the remove all button.
  8. Click Next to go to the Charts page of the wizard. Here you choose the charts used to compare the simulation output. Choose the type of the chart in the Type cell (available types are dataset charts for datasets and bar charts for scalar values). Specify the Chart Title in the cell to the right. Finally, specify the expression evaluating the value to be displayed on the chart in the Expression cell. Use root here to refer to the top-level agent, e.g. root.myDataset.
  9. For instance, suppose you have the Bass Diffusion model and want to run it several times with different parameters and compare the dynamics of product adoption across these runs. Let's assume you store the history of the number of adopters in a dataset defined in the root agent, say adoptersDS (a minimal sketch of how such a dataset might be populated follows this list). To obtain a chart displaying adoption dynamics for multiple runs with different parameter values, on the Charts page of the New Experiment - Compare Runs wizard choose Dataset in the Type cell of the table, type your title, say Number of adopters, and specify the name of the root agent's dataset you want to display on the chart in the Expression cell: root.adoptersDS
  10. Click Finish to complete the creation process.
  11. You will see the default experiment UI generated. The left part of the presentation window contains controls for changing parameter values between model runs. The right part contains charts displaying the simulation output for multiple model runs.
  12. Run the experiment to check how it works. The presentation window will appear. It allows you to perform multiple model runs (each one is started when you click the Perform Model Run button). Between model runs you can change the parameter values using the controls. The type of control for each parameter is set according to the parameter's editor settings (in the Value editor section of the parameter's Properties view). The chart(s) to the right show the simulation output. You can highlight the output of a particular model run by clicking the corresponding item in the chart's legend. This highlights the corresponding curve in the chart and shows, in the controls, the values of the parameters that were used to obtain this result.
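
As a rough illustration of the Bass Diffusion example above, the adoptersDS dataset could be filled from a recurring event in the root agent. The dataset and stock names (adoptersDS, adopters) are assumptions used only for this sketch; an AnyLogic dataset can also be configured to update automatically instead.

  // Action of a cyclic event in the root agent, executed once per model time unit:
  // record the current number of adopters so that the chart expression
  // root.adoptersDS has a point for each time step.
  adoptersDS.add( time(), adopters );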

Properties

General

Name – The name of the experiment.
 Since AnyLogic generates a Java class for each experiment, please follow Java naming guidelines and start the name with an uppercase letter.

Ignore – If selected, the experiment is excluded from the model.

Top-level agent – Using the drop-down list, choose the top-level agent type for the experiment. The agent of this type will play the role of the root of the hierarchical tree of agents in your model.

Maximum available memory – The maximum size of the Java heap allocated for the model.

Parameters

Parameters – Here the user can define the actual values of the parameters of the top-level agent.

Paste from clipboard – Use this button to paste parameter values from the clipboard into the fields above (the values must already have been copied to the clipboard).

Model time

Stop – Defines whether the model will Stop at specified time, Stop at specified date, or Never stop. In the first two cases, the stop time is specified using the Stop time/Stop date controls.

Start time – The initial time for the simulation time horizon.

Start date – The initial calendar date for the simulation time horizon.

Stop time – The final time for the simulation time horizon (the number of model time units the model runs before it is stopped).

Stop date – The final calendar date for the simulation time horizon.

Randomness

Random number generator – Here you specify whether you want to initialize the random number generator for this model randomly or with some fixed seed. This matters for stochastic models. If the generator is initialized randomly, model runs cannot be reproduced, since the model random number generator is initialized with a different value for each run. If you specify a fixed seed value, the generator is initialized with the same value for each run, so the model runs are reproducible. Moreover, here you can substitute your own RNG for the AnyLogic default RNG.

Random seed (unique simulation runs) – If selected, the seed value of the random number generator is random. In this case the random number generator is initialized with a different value for each model run, and the model runs are unique (non-reproducible).

Fixed seed (reproducible simulation runs) – If selected, the seed value of the random number generator is fixed (specify it in the Seed value field). In this case the random number generator is initialized with the same value for each model run, and the model runs are reproducible.

Custom generator (subclass of Random) – If for any reason you are not satisfied with the quality of the default random number generator Random, you can substitute your own generator for it. Just prepare your custom RNG (it should be a subclass of the Java class Random, e.g. MyRandom), choose this option, and type the expression returning an instance of your RNG in the field to the right, for example: new MyRandom() or new MyRandom( 1234 ). You can find more information here.
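
A minimal sketch of such a subclass is shown below; the class name MyRandom follows the example above, and the commented-out next(bits) override marks where a real generation algorithm would go.

  import java.util.Random;

  // A custom random number generator: a subclass of java.util.Random.
  // As written it simply delegates to the default algorithm; override
  // next(int bits) to plug in your own generator.
  public class MyRandom extends Random {
      public MyRandom() { super(); }
      public MyRandom( long seed ) { super( seed ); }
      // @Override
      // protected int next( int bits ) { /* your algorithm here */ }
  }

With this class available in the model, the expression new MyRandom( 1234 ) in the field creates the generator instance.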

Selection mode for simultaneous events – Here you can choose the order of execution for simultaneous events (that occur at the same moment of model time). Choose from:

Window

Window properties define the appearance of the presentation window that is shown when the user starts the experiment. Note that the size of the experiment window is defined using the model frame and applies to all experiments and agent types of the model.

Title – The title of the presentation window.

Enable zoom and panning – If selected, the user will be allowed to pan and zoom the presentation window.

Maximized size – If selected, the presentation window will be maximized on model launch.

Close confirmation – If selected, a dialog box asking for confirmation will be shown when the model window is closed. This may prevent the user from closing the window by accidentally clicking the window's close button.

The Show Toolbar sections properties section defines which sections of the presentation window's toolbar are visible. To make a toolbar section visible, select the corresponding checkbox.

The Show Statusbar sections properties section defines which sections of the presentation window's status bar are visible. To make a status bar section visible, select the corresponding checkbox.

Java actions

Initial experiment setup – The code executed on experiment setup.

Before each experiment run – The code executed before each simulation run.

Before simulation run – The code executed before each simulation run. This code is run on setup of the model. At this moment the top-level agent of the model is already created, but the model is not started yet. Here you may perform actions with elements of the top-level agent, e.g. assign actual parameter values.
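
For example, assuming the top-level agent defines a parameter named totalPopulation (a hypothetical name used only for illustration), the Before simulation run code could assign it like this; root refers to the top-level agent, as in the chart expressions above.

  // Before simulation run: the top-level agent (root) already exists,
  // so its parameters can be set before the run starts.
  root.totalPopulation = 10000;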

After simulation run – The code executed after each simulation run. This code is executed when the simulation engine finishes the model execution (the Engine.finished() function is called). This code is not executed when you stop the model by clicking the Terminate execution button.
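
A possible sketch of what could go here, assuming the adoptersDS dataset from the Bass Diffusion example above; the dataset name and the use of its getYMax() accessor are illustrative assumptions only.

  // After simulation run: the engine has finished this run, so the collected
  // output of the top-level agent can be examined or logged.
  traceln( "Peak number of adopters in this run: " + root.adoptersDS.getYMax() );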

Advanced Java

Imports section – import statements needed for correct compilation of the experiment class' code. When Java code is generated, these statements are inserted before the definition of the Java class.

Additional class code – Arbitrary member variables, nested classes, constants and methods are defined here. This code will be inserted into the experiment class definition. You can access these class data members anywhere within this experiment. 
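
A minimal sketch of how the Imports section and Additional class code fields could be used together; the import, the constant, and the helper method are assumptions for illustration, not part of the generated class.

  // Imports section:
  import java.util.Locale;

  // Additional class code:
  // a constant and a helper method available from any action code of this experiment
  static final double DEFAULT_PRICE = 9.99;

  String formatPrice( double price ) {
      return String.format( Locale.US, "%.2f", price );
  }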

Java machine arguments – Specify here the Java machine arguments you want to apply when launching your model. You can find detailed information on the possible arguments on the Java web site: http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/java.html

Command-line arguments – Here you can specify command-line arguments you want to pass to your model. You can get the values of the passed arguments in the experiment's Additional class code using the method String[] getCommandLineArguments().
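
For instance, a sketch of a helper (placed in Additional class code) that reads the first command-line argument as a seed value; the argument's meaning and the helper name are assumptions.

  // Returns the first command-line argument parsed as a long,
  // or the given default if no arguments were passed.
  long seedFromCommandLine( long defaultSeed ) {
      String[] args = getCommandLineArguments();
      return args.length > 0 ? Long.parseLong( args[0] ) : defaultSeed;
  }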

Advanced

Load top-level agent from snapshot – If selected, the experiment will load the model state from the snapshot file specified in the control to the right. The experiment will be started from the time when the model state was saved.