How-To video: Compare Runs experiment
This is an interactive experiment that allows you to input the model parameters, run the simulation, and add the simulation output to charts where it can be compared with the results of other runs.
The default UI for this experiment includes the input fields and the output charts. You can choose a particular output result, click on its chart, and display the corresponding parameter values.
You can control the Compare runs experiment with Java code. Refer to the Compare runs experiment functions section for details.
Demo model: Compare Runs Experiment
Name – The name of the experiment. Since AnyLogic generates a Java class for each experiment, follow Java naming guidelines and start the name with an uppercase letter.
Ignore – If selected, the experiment is excluded from the model.
Top-level agent – Using the drop-down list, choose the top-level agent type for the experiment. The agent of this type will play a role of a root for the hierarchical tree of agents in your model.
Maximum available memory – The maximum size of the Java heap allocated for the model.
Parameters – Here you define the actual values of the top-level agent's parameters.
Paste from clipboard – Use this button to paste parameter values from the clipboard into the fields above (the values must already be copied to the clipboard at that moment).
Stop – Defines whether the model will Stop at specified time, Stop at specified date, or Never stop. In the first two cases, the stop time is specified using the Stop time / Stop date controls.
Start time – The initial time for the simulation time horizon.
Start date – The initial calendar date for the simulation time horizon.
Stop time – The final time for the simulation time horizon (the number of model time units the model runs before it is stopped).
Stop date – The final calendar date for the simulation time horizon.
Random number generator – Here you specify whether the random number generator for this model is initialized randomly or with some fixed seed. This matters for stochastic models, which use a seed value to initialize the pseudorandom number generator. With a random seed, model runs cannot be reproduced, since the model's random number generator is initialized with a different value on each run. With a fixed seed, the generator is initialized with the same value on each run, so the model runs are reproducible. Moreover, here you can substitute your own RNG for the AnyLogic default RNG.
Random seed (unique simulation runs) – If selected, the seed value of the random number generator is random. In this case the random number generator is initialized with a different value for each model run, and the model runs are unique (non-reproducible).
Fixed seed (reproducible simulation runs) – If selected, the seed value of the random number generator is fixed (specify it in the Seed value field). In this case the random number generator is initialized with the same value for each model run, and the model runs are reproducible.
Custom generator (subclass of Random) – If for any reason you are not satisfied with the quality of the default random number generator Random, you can substitute your own. Prepare your custom RNG (it must be a subclass of the Java class Random, e.g. MyRandom), choose this option, and type an expression returning an instance of your RNG in the field on the right, for example: new MyRandom() or new MyRandom(1234). You can find more information here.
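The seed options above can be sketched in plain Java (outside AnyLogic). A fixed seed makes java.util.Random reproducible, and a custom generator is simply a subclass of Random; MyRandom here is the hypothetical example name from the text and only delegates to the default algorithm.

```java
import java.util.Random;

// A minimal custom RNG: any subclass of java.util.Random can be supplied
// as a custom generator. This one adds nothing and is for illustration only.
class MyRandom extends Random {
    public MyRandom() { super(); }
    public MyRandom(long seed) { super(seed); }
}

public class SeedDemo {
    public static void main(String[] args) {
        // Fixed seed: two generators created with the same seed produce the
        // same sequence, which is what makes fixed-seed runs reproducible.
        Random a = new Random(1234);
        Random b = new Random(1234);
        System.out.println(a.nextInt(100) == b.nextInt(100)); // prints "true"

        // The custom subclass behaves the same way with a fixed seed.
        MyRandom c = new MyRandom(1234);
        MyRandom d = new MyRandom(1234);
        System.out.println(c.nextDouble() == d.nextDouble()); // prints "true"
    }
}
```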
Selection mode for simultaneous events – Here you can choose the order of execution for simultaneous events (those that occur at the same moment of model time). Choose from:
Window properties define the appearance of the presentation window that is shown when the user starts the experiment. Note that the size of the experiment window is defined using the model frame and applies to all experiments and agent types of the model.
Title – The title of the presentation window.
Enable zoom and panning – If selected, the user will be allowed to pan and zoom the presentation window.
Maximized size – If selected, the presentation window will be maximized on model launch.
Close confirmation – If selected, a dialog box asking for confirmation is shown when the model window is closed. This may prevent the user from closing the window by accidentally clicking the window's close button.
The Show Toolbar sections properties section defines which sections of the presentation window's toolbar are visible. To make a toolbar section visible, select the corresponding checkbox.
The Show Statusbar sections properties section defines which sections of the presentation window's status bar are visible. To make a status bar section visible, select the corresponding checkbox.
Initial experiment setup – The code executed on experiment setup.
Before each experiment run – The code executed before each simulation run.
Before simulation run – The code executed before the simulation run. This code runs on setup of the model: at this moment the top-level agent of the model is already created, but the model is not started yet. Here you may perform actions on elements of the top-level agent, e.g. assign actual parameter values.
After simulation run – The code executed after the simulation run. This code is executed when the simulation engine finishes the model execution (the Engine.finished() function is called). This code is not executed when you stop the model by clicking the Terminate execution button.
Imports section – The import statements needed for correct compilation of the experiment class's code. When Java code is generated, these statements are inserted before the definition of the Java class.
Additional class code – Arbitrary member variables, nested classes, constants and methods are defined here. This code will be inserted into the experiment class definition. You can access these class data members anywhere within this experiment.
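As a sketch of what might go into the Additional class code field: member variables and helper methods defined there become part of the generated experiment class and are visible from all of the experiment's code fields. The names below are hypothetical examples, and the wrapping class exists only so the sketch compiles standalone; in AnyLogic the members go directly into the field.

```java
// Hypothetical members for the Additional class code field.
class ExperimentExtras {
    // A constant usable from any of the experiment's code fields.
    static final int WARMUP_DAYS = 30;

    // A member variable that persists across simulation runs within one
    // experiment, e.g. to track the best result seen so far.
    double bestObjective = Double.MAX_VALUE;

    // A helper method callable from, e.g., the After simulation run field.
    // Returns how much the given result improves on the best so far and
    // updates the record when the new result is better.
    double improvement(double current) {
        double delta = bestObjective - current;
        if (current < bestObjective) {
            bestObjective = current;
        }
        return delta;
    }
}
```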
Java machine arguments – Specify here the Java machine arguments you want to apply when launching your model. You can find detailed information on the possible arguments on the Java web site: http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/java.html
Command-line arguments – Here you can specify command-line arguments you want to pass to your model. You can get the values of the passed arguments in the experiment's Additional class code using the method String getCommandLineArguments().
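A plain-Java sketch of processing such an argument string: getCommandLineArguments() is the method named in the text, and this example merely assumes it returns something like "-scenario base -runs 5" and splits it on whitespace (a simplification that ignores quoting).

```java
// Hypothetical helper for splitting a command-line string into tokens,
// e.g. the string returned by the experiment's getCommandLineArguments().
class ArgsDemo {
    static String[] tokenize(String commandLine) {
        // Treat null or blank input as "no arguments passed".
        if (commandLine == null || commandLine.trim().isEmpty()) {
            return new String[0];
        }
        // Split on runs of whitespace; quoted arguments are not handled.
        return commandLine.trim().split("\\s+");
    }
}
```

For example, tokenize("-scenario base -runs 5") yields the four tokens -scenario, base, -runs, and 5, which you could then match up as option/value pairs.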
Load top-level agent from snapshot – If selected, the experiment will load the model state from the snapshot file specified in the control to the right. The experiment will be started from the time when the model state was saved.