An experiment stores a set of configuration parameters of a model. AnyLogic supports several types of experiments meant for different simulation tasks.
When a new project is created, one experiment is created automatically. It is a simulation experiment named Simulation. It runs the model simulation with animation displayed and model debugging enabled. The simulation experiment is used in most cases. Other AnyLogic experiments are needed only when the model parameters play a significant role and you need to analyze how they affect the model behavior, or when you want to find the optimal parameter values for your model.
AnyLogic supports the following types of experiments:
Please note that Compare Runs, Monte Carlo, Sensitivity Analysis, Calibration, and Custom experiments are available only in AnyLogic Professional and University Researcher editions.
Simulation experiment runs the model simulation with animation displayed and model debugging enabled. It is used in most cases. Other AnyLogic experiments are needed only when the model parameters play a significant role, or when you need to configure a complex simulation comprising several individual model runs.
Parameters variation experiment performs a complex model simulation comprising several single model runs, varying one or more root object parameters. Using this experiment, you can compare the model behavior under different parameter values and analyze how certain parameters affect it. Running this experiment with fixed parameter values allows you to estimate the influence of stochastic processes in your model.
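As a rough illustration of what such a multi-run simulation amounts to, the sketch below loops over several values of one parameter and records the output of each run. It assumes a hypothetical helper runOnce(...) that performs one engine-controlled model run and returns an output value; a sketch of such a single run is given under the Custom experiment description below.

```java
// Conceptual sketch of a parameter sweep: run the model once per parameter
// value and collect the outputs for comparison. runOnce(...) is a hypothetical
// helper performing a single engine-controlled run (see the Custom experiment
// sketch below); it is not part of the AnyLogic API.
double[] rates   = { 0.5, 1.0, 1.5, 2.0 };
double[] outputs = new double[rates.length];
for (int i = 0; i < rates.length; i++) {
    outputs[i] = runOnce(rates[i]);   // one model run with this parameter value
    System.out.println("rate = " + rates[i] + " -> output = " + outputs[i]);
}
// Repeating the loop with the same, fixed parameter value instead would show
// how much of the spread in the outputs is due to stochastic processes alone.
```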
Optimization experiment finds the combination of parameter values that results in the best possible solution. Using the optimization experiment, you can observe system behavior under specific conditions as well as improve system performance.
Monte Carlo experiment obtains and displays a collection of simulation outputs for a stochastic model or for a model with stochastically varied parameter(s). Both regular and 2D histograms may be used.
Compare Runs experiment enables you to interactively input different parameter values and run the model multiple times. It visually compares outputs of simulation runs in both scalar and dataset forms.
Sensitivity Analysis experiment runs the model multiple times, varying one of the parameters, and shows how the simulation output depends on it.
Calibration experiment uses the optimizer to find the model parameter values under which the simulation output best fits the given data. The data may be in either scalar or dataset form. In the case of multiple criteria, coefficients may be used to weight them. The calibration progress and the fit of each criterion are displayed.
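To make the idea of a fitting criterion concrete, the small helper below computes a weighted sum of squared differences between simulated and observed values; minimizing an error of this kind over the parameter values is what a calibration run does. The helper is purely illustrative and is not an AnyLogic API function.

```java
// Hypothetical fitting criterion: weighted squared error between a simulated
// dataset and the observed data it should reproduce. Plain Java, for illustration.
static double fittingError(double[] simulated, double[] observed, double weight) {
    double sum = 0.0;
    for (int i = 0; i < observed.length; i++) {
        double diff = simulated[i] - observed[i];
        sum += diff * diff;                 // penalize deviation at every data point
    }
    return weight * sum;                    // the coefficient weights this criterion
}
```

With several criteria, the overall objective would be the sum of such weighted terms, one per criterion.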
RL experiment is used to prepare the AnyLogic model for the reinforcement learning (RL) training of an AI agent and to upload it to the appropriate platform.
Custom experiment runs an experiment with a custom scenario written entirely by the user. It gives you maximum flexibility in setting parameters, managing simulation runs, and making decisions. The experiment simply provides a code field where you can do all of that (and a lot more) using the rich Java API of the AnyLogic engine (functions such as run(), stop(), etc.). This experiment has no built-in graphical interface and no predefined behavior.
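A minimal sketch of what a Custom experiment's code field might contain is shown below: one controlled run with a fixed seed, after which the output is read and reported. The root agent type Main, its parameter arrivalRate, and its output meanQueueLength are hypothetical names chosen for the example; the engine calls follow the pattern commonly used in Custom experiment code.

```java
// Minimal sketch of a Custom experiment's code field: one controlled run with
// a fixed seed. "Main", "arrivalRate" and "meanQueueLength" are hypothetical
// names; the engine calls follow the usual Custom experiment pattern.
Engine engine = createEngine();                  // create the engine for this run
engine.getDefaultRandomGenerator().setSeed(1);   // fixed seed: reproducible run
engine.setStopTime(1000);                        // stop after 1000 model time units
Main root = new Main(engine, null, null);        // create the root agent
root.arrivalRate = 1.5;                          // set parameters before starting
engine.start(root);                              // prepare the engine for simulation
engine.runFast();                                // run without animation until finished
double result = root.meanQueueLength;            // read the output of interest
engine.stop();                                   // destroy the model
System.out.println("mean queue length = " + result);
```

Wrapped in a function of the experiment, code like this could serve as the runOnce(...) helper assumed in the parameter sweep sketch above.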