Compare Runs Experiment

How-To video: Compare Runs experiment

This is an interactive experiment that allows you to input model parameters, run the simulation, and add the simulation output to charts, where it can be compared with the results of other runs.

The default UI for this experiment includes the input fields and the output charts. You can choose a particular output result by clicking it on the chart; the corresponding parameter values are then displayed.

You can control the Compare Runs experiment with Java code. Refer to the Functions section for details.

Demo model: Compare Runs Experiment

Creating a compare runs experiment

To create a compare runs experiment
  1. In the Projects view, right-click (Mac OS: Ctrl+click) the model item and choose New > Experiment from the popup menu. The New Experiment dialog box is displayed.
  2. Choose the Compare Runs option in the Experiment Type list.
  3. Type the experiment name in the Name edit box.
  4. Choose the top-level agent of the experiment from the Top-level agent drop-down list.
  5. If you want to apply model time settings from another experiment, leave the Copy model time settings from check box selected and choose the experiment in the drop-down list to the right.
  6. Click Next when finished. The Parameters page of the wizard opens. Here you specify the parameters of the top-level agent whose values you will be able to change between experiment runs.
  7. Add the parameters you need from the Available list to the Selection list. To add a parameter, select it in the Available list and click the corresponding button; to add all available parameters, click the corresponding button. If needed, you can remove any selected parameter at any time: select it in the Selection list and click the corresponding button, or remove all selected parameters at once with the corresponding button.
  8. Click Next to go to the Charts page of the wizard. Here you choose the charts used to compare the simulation output. Choose the type of the chart in the Type cell (available types are dataset charts for datasets and bar charts for scalar values). Specify the chart title in the Chart Title cell to the right. Finally, in the Expression cell, specify the expression evaluating the value to be displayed on the chart. Use root here to refer to the top-level agent, e.g. root.myDataset.
  9. For instance, suppose you have the Bass Diffusion model and want to run it several times with different parameters and compare the dynamics of product adoption across these runs. Assume you store the history of the number of adopters in a dataset defined in the root agent, say adoptersDS. To obtain a chart displaying the adoption dynamics for multiple runs with different parameter values, on the Charts page of the New Experiment - Compare Runs wizard choose Dataset in the Type cell of the table, type your title, say Number of adopters, and specify the name of the root agent's dataset you want to display on the chart in the Expression cell: root.adoptersDS.
  10. Click Finish to complete the creation process.
  11. You will see the generated default experiment UI. The left part of the model window contains controls for changing parameter values between model runs. The right part contains charts displaying the simulation output for multiple model runs.
  12. Run the experiment to check how it works. The model window appears. It allows you to perform multiple model runs (each one starts when you click the Run button in the control panel at the bottom of the model window). Between model runs you can change the parameter values using the controls. The type of control for each parameter is set according to the parameter editor settings (in the Value editor section of the parameter's Properties view). The chart(s) to the right show the simulation output. You can highlight the output of a particular model run by clicking the corresponding item in the chart's legend. This highlights the corresponding curve in the chart and shows, in the controls, the parameter values that were used to obtain this result.

Properties

General

Name – The name of the experiment.
Since AnyLogic generates a Java class for each experiment, please follow Java naming guidelines and start the name with an uppercase letter.

Ignore – If selected, the experiment is excluded from the model.

Top-level agent – Using the drop-down list, choose the top-level agent type for the experiment. The agent of this type will serve as the root of the hierarchical tree of agents in your model.

Maximum available memory – The maximum size of Java heap allocated for the model.

Parameters

Parameters – Here you define the actual values of the top-level agent's parameters.

Paste from clipboard – Use this button to paste parameter values from the Clipboard into the fields above (the values must already have been copied to the Clipboard).

Model time

Stop – Defines whether the model will Stop at specified time, Stop at specified date, or Never stop. In the first two cases, the stop time is specified using the Stop time / Stop date controls.

Start time – The initial time for the simulation time horizon.

Start date – The initial calendar date for the simulation time horizon.

Stop time – The final time for the simulation time horizon (the number of model time units for model to run before it will be stopped).

Stop date – The final calendar date for the simulation time horizon.

Randomness

Random number generator – Here you specify whether you want to initialize the random number generator for this model randomly or with a fixed seed. This matters for stochastic models, which draw their values from a pseudorandom number generator. With a random seed, model runs cannot be reproduced, since the generator is initialized with a different value for each run. By specifying a fixed seed value, you initialize the generator with the same value for each model run, so the runs are reproducible. Moreover, here you can substitute the AnyLogic default RNG with your own RNG.

Random seed (unique simulation runs) – If selected, the seed value of the random number generator is chosen randomly. In this case the random number generator is initialized with a different value for each model run, and the model runs are unique (non-reproducible).

Fixed seed (reproducible simulation runs) – If selected, the seed value of the random number generator is fixed (specify it in the Seed value field). In this case the random number generator is initialized with the same value for each model run, and the model runs are reproducible.

Custom generator (subclass of Random) – If for any reason you are not satisfied with the quality of the default random number generator Random, you can substitute it with your own. Just prepare your custom RNG (it should be a subclass of the Java class Random, e.g. MyRandom), choose this option, and type an expression returning an instance of your RNG in the field on the right, for example: new MyRandom() or new MyRandom( 1234 ). You can find more information here.
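For example, a minimal custom generator could look like the sketch below; the class name MyRandom is illustrative, and this version simply delegates to java.util.Random (override next(int bits) to substitute a different algorithm):

```java
import java.util.Random;

// Minimal sketch of a custom RNG: any subclass of java.util.Random works.
// This version inherits the default algorithm unchanged; to plug in a
// different generator (e.g., a Mersenne Twister), override next(int bits).
class MyRandom extends Random {
    MyRandom() {
        super();        // randomly seeded: unique runs
    }

    MyRandom(long seed) {
        super(seed);    // fixed seed: reproducible sequence
    }
}
```

With this class available to the model, entering new MyRandom( 1234 ) in the field initializes every run with the same seed.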

Window

Window properties define the appearance of the model window that is shown when the user starts the experiment. Note that the size of the experiment window is defined using the model frame and applies to all experiments and agent types of the model.

Title – The title of the model window.

Enable zoom and panning – If selected, the user will be allowed to pan and zoom the model window.

Enable developer panel – Select/clear the checkbox to enable/disable the developer panel in the model window.

Show developer panel on start – [Enabled only if the Enable developer panel checkbox is selected] If selected, the developer panel will be shown by default in the model window every time you run the experiment.

Java actions

Initial experiment setup – The code executed on experiment setup.

Before each experiment run – The code executed before each simulation run.

Before simulation run – The code executed before each simulation run. This code is run on model setup. At this moment the top-level agent of the model is already created, but the model is not started yet. You may perform actions with elements of the top-level agent here, e.g., assign actual parameter values.
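As an illustration, a Before simulation run body might assign parameter values to the top-level agent; the parameter names below are hypothetical (AnyLogic generates a set_<name>() function for each parameter of the agent):

```java
// Hypothetical "Before simulation run" action code. The top-level agent
// already exists at this point, so its parameters can be assigned before
// the run starts. adFraction and contactRate are assumed parameter names.
root.set_adFraction(0.011);
root.set_contactRate(100.0);
traceln("Configured run " + getRunCount());
```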

After simulation run – The code executed after each simulation run. This code is executed when the simulation engine finishes the model execution (the Engine.finished() function is called). This code is not executed when you stop your model by clicking the Terminate execution button.

Advanced Java

Imports section – import statements needed for correct compilation of the experiment class code. When Java code is generated, these statements are inserted before the definition of the Java class.

Additional class code – Arbitrary member variables, nested classes, constants, and methods are defined here. This code will be inserted into the experiment class definition. You can access these class data members anywhere within this experiment.

Java machine arguments – Specify here the Java machine arguments you want to apply when launching your model. You can find detailed information on possible arguments at the Java Sun Microsystems web site: http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/java.html

Command-line arguments – Here you can specify command-line arguments you want to pass to your model. You can get the values of the passed arguments in the experiment's Additional class code using the method String[] getCommandLineArguments().
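For instance, such arguments could be parsed into key-value pairs with plain Java; the -key value convention and the CommandLineParser helper below are assumptions for illustration (inside AnyLogic, the array would come from getCommandLineArguments()):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: turn an argument array such as {"-seed", "42", "-mode", "fast"}
// into a map of option names to values. In the experiment's Additional
// class code the array would be obtained via getCommandLineArguments();
// here it is passed in directly so the helper stays self-contained.
class CommandLineParser {
    static Map<String, String> parse(String[] args) {
        Map<String, String> options = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i += 2) {
            if (args[i].startsWith("-")) {
                options.put(args[i].substring(1), args[i + 1]);
            }
        }
        return options;
    }
}
```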

Advanced

Load top-level agent from snapshot – If selected, the experiment will load the model state from the snapshot file specified in the control to the right. The experiment will be started from the time when the model state was saved.

Functions

You can use the following functions to control the experiment, retrieve data on its execution status, and use it as a framework for creating a custom experiment UI.

Controlling execution

Function

Description

void run()

Starts the experiment execution from the current state.

If the model does not exist yet, the function resets the experiment, then creates and starts the model.

void pause()

Pauses the experiment execution.

void step()

Performs one step of experiment execution.

If the model does not exist yet, the function resets the experiment, then creates and starts the model.

void stop()

Terminates the experiment execution.

void finish()

Sets a flag that, when tested by the engine, causes it to finish after completing the current event execution.

void close()

This function returns immediately and performs the following actions in a separate thread:

  • Stops experiment if it is not stopped,

  • Destroys the model,

  • Closes the experiment window (only if model is started in the application mode).

Experiment.State getState()

Returns the current state of the experiment: IDLE, PAUSED, RUNNING, FINISHED, ERROR, or PLEASE_WAIT.

double getRunTimeSeconds()

Returns the duration of the experiment execution in seconds, excluding pause times.

int getRunCount()

Returns the number of the current simulation run, i.e., the number of times the model was destroyed.

double getProgress()

Returns the progress of the experiment: a number between 0 and 1 corresponding to the currently completed part of the experiment (a proportion of completed iterations of the total number of iterations), or -1 if the progress cannot be calculated.
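As a sketch, a custom Run/Pause control in an experiment UI could combine these functions; the code below is assumed to run in the experiment's context (e.g., as a button's action):

```java
// Hypothetical action code of a custom Run/Pause toggle button.
if (getState() == Experiment.State.RUNNING) {
    pause();    // suspend the current run
} else {
    run();      // start the model, or resume it from the current state
}
```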


Accessing the model

Function

Description

Engine getEngine()

Returns the engine executing the model. To access the model's top-level agent (typically, Main), call getEngine().getRoot();

IExperimentHost getExperimentHost()

Returns the experiment host object of the model, or some dummy object without functionality if the host object does not exist.


Restoring the model state from snapshot

Function

Description

void setLoadRootFromSnapshot(String snapshotFileName)

Tells the experiment to load the top-level agent from an AnyLogic snapshot file. This function is only available in AnyLogic Professional.

Parameter:
snapshotFileName - the name of the AnyLogic snapshot file, for example:
"C:\\My Model.als"

boolean isLoadRootFromSnapshot()

Returns true if the experiment is configured to start the simulation from the state loaded from the snapshot file; returns false otherwise.

String getSnapshotFileName()

Returns the name of the snapshot file, from which this experiment is configured to start the simulation.


Error handling

Function

Description

RuntimeException error(Throwable cause, String errorText)

Signals an error during the model run by throwing a RuntimeException with errorText preceded by the agent's full name.

This function never returns; it throws the runtime exception itself. The return type is defined for cases when you would like to use the following form of call:
throw error("my message");

Parameters:
cause - the cause (which will be saved for more detailed message), may be null.
errorText - the text describing the error that will be displayed.

RuntimeException errorInModel(Throwable cause, String errorText)

Signals a model logic error during the model run by throwing a ModelException with the specified error text preceded by the agent's full name.

This function never returns; it throws the runtime exception itself. The return type is defined for cases when you would like to use the following form of call:
throw errorInModel("my message");

This function differs from error() in the way the error message is displayed: model logic errors are 'softer' than other errors; they commonly occur in models and signal the modeler that the model might need some parameter adjustments.

Examples are 'agent was unable to leave the flowchart block because the subsequent block was busy', 'insufficient capacity of pallet rack', etc.

Parameters:
cause - the cause (which will be saved for more detailed message), may be null.
errorText - the text describing the error that will be displayed.

void onError(Throwable error)

This function may be overridden to perform custom handling of the errors that occurred during the model execution (i.e., errors in the action code of events, dynamic events, transitions, entry/exit codes of states, formulas, etc.).

By default, this function does nothing as its definition is empty. To override it, add a function to the experiment, name it onError, and define a single argument of type java.lang.Throwable for it.

Parameter:
error - an error which has occurred during event execution.

void onError(Throwable error, Agent root)

Similar to the onError(Throwable error) function, except that it provides one more argument to access the top-level (root) agent of the model.

Parameters:
error - an error which has occurred during event execution.
root - the top-level (root) agent of the model. Useful for experiments with multiple runs executed in parallel. May be null in some cases (e.g. on errors during top-level agent creation).
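A sketch of such an override, defined as a function named onError in the experiment (the logging and the decision to terminate the run are assumptions for illustration):

```java
// Hypothetical onError override: log the failure together with the run
// number, then terminate the experiment run cleanly.
void onError(Throwable error, Agent root) {
    traceln("Run " + getRunCount() + " failed: " + error);
    if (root != null)
        traceln("Top-level agent: " + root.getFullName());
    stop();
}
```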


Command-line arguments

Function

Description

String[] getCommandLineArguments()

Returns an array of command-line arguments passed to this experiment on model start. Never returns null: if no arguments are passed, an empty array is returned.

You can call this function in the experiment's Additional class code.