Optimization experiment

If you need to run a simulation and observe system behavior under certain conditions, as well as improve system performance by making decisions about system parameters and/or structure, you can use the optimization capability of AnyLogic. Optimization is the process of finding the combination of conditions that results in the best possible solution. Optimization can help you find, for example, the optimal performance of a server or the best method for processing bills.

AnyLogic optimization is built on top of the OptQuest Optimization Engine, one of the most flexible and user-friendly optimization tools on the market. The OptQuest Engine automatically finds the best parameters of a model, with respect to certain constraints. AnyLogic provides a convenient graphical user interface to set up and control the optimization.

How-To video: Optimization experiment

OptQuest is a registered trademark of OptTek Systems, Inc. For advanced information about the OptQuest Engine, please visit OptTek’s web site www.opttek.com.

AnyLogic enables exporting models with optimization experiments as standalone applications.

The optimization process consists of repetitive simulations of a model with different parameters. Using sophisticated algorithms, the OptQuest Engine varies controllable parameters from simulation to simulation to find the optimal parameters for solving a problem.

You can control the optimization experiment with Java code.

To optimize your model
  1. Create a new optimization experiment.
  2. Define optimization parameters (parameters to be varied).
  3. Create experiment UI.
  4. Specify the function to be minimized or maximized (the objective function).
  5. Define constraints and requirements to be met (optional).
  6. Specify the simulation stop condition.
  7. Specify the optimization stop condition.
  8. Run the optimization.

The last step of the System dynamics tutorial explains how to create and run an optimization step by step.

To create an optimization experiment
  1. In the Projects view, right-click (Mac OS: Ctrl click) the model item and choose New > Experiment from the popup menu. The New Experiment dialog box is displayed.
  2. Choose the Optimization option from the Experiment Type list.
  3. Type the experiment name in the Name edit box.
  4. Choose the top-level agent of the experiment from the Top-level agent drop-down list.
  5. If you want to apply model time settings from another experiment, leave the Copy model time settings from check box selected and choose the experiment in the drop-down list to the right.
  6. Click Finish.

To increase the performance of your optimization experiment, check out some tips that we have for you.

Properties

General

Name – The name of the experiment.
Since AnyLogic generates Java class for each experiment, please follow Java naming guidelines and start the name with an uppercase letter.

Ignore – If selected, the experiment is excluded from the model.

Top-level agent – Using the drop-down list, choose the top-level agent type for the experiment. The agent of this type will play the role of the root of the hierarchical tree of agents in your model.

Objective – The objective function you want to minimize or maximize. The top-level agent is accessible here as root.

Number of iterations – If selected, the optimization stops when the maximum number of simulations, specified in the field to the right, is exceeded.

Automatic stop – If selected, the optimization stops when the value of the objective function stops improving significantly (this option is called optimization autostop).

Maximum available memory – The maximum size of Java heap allocated for the model.

Create default UI – The button creates the default UI for the experiment.
Do not click this button if you have already created the experiment UI (e.g. with the wizard): it will delete the existing UI and create the default UI for the optimization experiment, which may not correspond to your task.

Parameters

Parameters – Here you define the set of optimization parameters (also known as decision variables). The table lists all the parameters of the top-level agent. To make a parameter a decision variable, click in the Type field and choose an optimization parameter type other than fixed. Depending on the type of the parameter, the list of possible values may vary: design, int, discrete for integer parameters; continuous and discrete for double, etc. Then specify the range for the parameter: enter the parameter's lower bound in the Min field and its upper bound in the Max field. For discrete and design parameters, specify the increment value in the Step field.

Model time

Stop – Defines whether the model will Stop at specified time, Stop at specified date, or Never stop. In the first two cases, the stop time is specified using the Stop time/Stop date controls.

Start time – The initial time for the simulation time horizon.

Start date – The initial calendar date for the simulation time horizon.

Stop time – [Enabled if Stop is set to Stop at specified time] The final time for the simulation time horizon (the number of model time units the model runs before it is stopped).

Stop date – [Enabled if Stop is set to Stop at specified date] The final calendar date for the simulation time horizon.

Additional optimization stop conditions – Here you can define any number of additional optimization stop conditions. When any of these conditions becomes true, the optimization stops. A condition can include checks of dataset mean confidence, variable values, etc. The top-level agent of the experiment can be accessed here as root; so if you want, e.g., to stop the optimization when the variable var of the experiment's top-level agent steps over a threshold, type something like root.var > 11. To make the condition active, select the checkbox in the corresponding row of the table.

Constraints

Here the user can define constraints - additional restrictions imposed on the optimization parameters.

Constraints on simulation parameters (are tested before a simulation run) – The table defining the optimization constraints. A constraint is a condition defined on the optimization parameters. It defines a range for an optimization parameter. Each time the optimization engine generates a new set of values for the optimization parameters, it creates feasible solutions satisfying this constraint; thus the search space is reduced, and the optimization is performed faster.
A constraint is a well-formed arithmetic expression describing a relationship between the optimization parameters. It always defines a limitation by specifying a lower or an upper bound, e.g. parameter1 >= 10.
Each constraint is defined in an individual row of the table and can be disabled by deselecting the corresponding checkbox in the first column.

Requirements

Here the user can define requirements - additional restrictions imposed on the solutions found by the optimization engine.

Requirements (are tested after a simulation run to determine whether the solution is feasible) – The table defining the optimization requirements. A requirement is an additional restriction imposed on the solution found by the optimization engine. Requirements are checked at the end of each simulation, and if they are not met, the current parameter values are rejected.
A requirement can also be a restriction on a response that requires its value to fall within a specified range. It may contain any variables, parameters, functions, etc. of the experiment's top level agent accessible in the expression field as root.
Each requirement is defined in an individual row of the table and can be disabled by deselecting the corresponding checkbox in the first column.

Randomness

Random number generator – Here you specify whether to initialize the random number generator for this model randomly or with some fixed seed. This matters for stochastic models, which use a seed value for the pseudorandom number generator. With a random seed, the model runs cannot be reproduced, since the model's random number generator is initialized with a different value for each run. With a fixed seed, the generator is initialized with the same value for each run, so the model runs are reproducible. Moreover, here you can substitute the AnyLogic default RNG with your own RNG.

Random seed (unique simulation runs) – If selected, the seed value of the random number generator is random. In this case the random number generator is initialized with a different value for each model run, and the model runs are unique (non-reproducible).

Fixed seed (reproducible simulation runs) – If selected, the seed value of the random number generator is fixed (specify it in the Seed value field). In this case the random number generator is initialized with the same value for each model run, and the model runs are reproducible.

Custom generator (subclass of Random) – If for any reason you are not satisfied with the quality of the default random number generator Random, you can substitute it with your own one. Prepare your custom RNG (it should be a subclass of the Java class Random, e.g. MyRandom), choose this option, and type an expression returning an instance of your RNG in the field on the right, for example: new MyRandom() or new MyRandom(1234). You can find more information here.
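For illustration, a minimal custom generator might look like the sketch below. The class name MyRandom is just a placeholder, and the sketch merely delegates to the default algorithm, which you would replace with your own:

import java.util.Random;

public class MyRandom extends Random {
    public MyRandom() { super(); }
    public MyRandom(long seed) { super(seed); }

    // Random derives all of its values from next(int bits), so overriding
    // this single method is enough to plug in a custom algorithm.
    @Override
    protected int next(int bits) {
        return super.next(bits); // replace with your own bit-generation logic
    }
}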

Replications

Use replications – If selected, the OptQuest Engine will run several replications per simulation.

Fixed number of replications – If selected, a fixed number of replications will be run per simulation.

Replications per iteration – [Enabled if Fixed number of replications is set] The fixed number of replications that will be run per simulation.

Varying number of replications (stop replications after minimum replications, when confidence level is reached) – If selected, a varying number of replications will be run per simulation. In this mode you specify the minimum and maximum number of replications to be run. The OptQuest Engine always runs the minimum number of replications for a solution, then determines whether more replications are needed. It stops evaluating a solution when the confidence level is reached for the objective or when the maximum number of replications has been run.

Minimum replications – [Enabled if Varying number of replications is set] The minimum number of replications the OptQuest Engine will always run per simulation.

Maximum replications – [Enabled if Varying number of replications is set] The maximum number of replications the OptQuest Engine can run per simulation.

Confidence level – [Enabled if Varying number of replications is set] The confidence level to be evaluated for the objective.

Error percent – [Enabled if Varying number of replications is set] The percent of the objective for which the confidence level is determined.

Window

Window properties define the appearance of the model window that will be shown, when the user starts the experiment. Note that the size of the experiment window is defined using the model frame and applies to all experiments and agent types of the model.

Title – The title of the model window.

Enable zoom and panning – If selected, the user will be allowed to pan and zoom the model window.

Enable developer panel – Select/clear the checkbox to enable/disable the developer panel in the model window.

Show developer panel on start – [Enabled only if the Enable developer panel checkbox is selected] If selected, the developer panel will be shown by default in the model window every time you run the experiment.

Java actions

Initial experiment setup – The code executed on experiment setup.

Before each experiment run – The code executed before each simulation run.

Before simulation run – The code executed before each simulation run, on model setup. At this moment the top-level agent of the model is already created, but the model is not started yet. You can perform actions on elements of the top-level agent here, e.g. assign actual parameter values, as in the sketch below.
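For instance, a hypothetical Before simulation run action might look like this (serviceRate is an assumed parameter of the top-level agent, and dayRate an assumed field defined in the experiment's Additional class code):

// The top-level agent is accessible as root at this point.
root.serviceRate = dayRate;
traceln("simulation starts with serviceRate = " + root.serviceRate);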

After simulation run – The code executed after each simulation run, when the simulation engine finishes the model execution (the Engine.finished() function is called). This code is not executed when you stop the model by clicking the Terminate execution button.

After iteration – The code executed after each iteration.

After experiment – The code executed after the experiment run.

Advanced Java

Imports section – import statements needed for correct compilation of the experiment class code. When Java code is generated, these statements are inserted before the definition of the Java class.

Additional class code – Arbitrary member variables, nested classes, constants and methods are defined here. This code will be inserted into the experiment class definition. You can access these class data members anywhere within this experiment.

Java machine arguments – Specify here the Java machine arguments you want to apply on launching your model. You can find detailed information on the possible arguments on the Java web site: http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/java.html

Command-line arguments – Here you can specify command-line arguments you want to pass to your model. You can get the values of the passed arguments in the experiment's Additional class code using the method String[] getCommandLineArguments().

Advanced

Allow parallel evaluations – If the option is selected and the processor has several cores, AnyLogic will run several experiment iterations in parallel on different processor cores. This increases performance severalfold and completes the experiment significantly faster. The feature is made controllable because in some rare cases parallel evaluations may affect the optimizer strategy so that more iterations are required to find the optimal solution.
Do not use static variables, collections, table functions or custom distributions (check that their advanced option Static is deselected) if you turn on parallel evaluations here.

Load top-level agent from snapshot – If selected, the experiment will load the model state from the snapshot file specified in the control to the right. The experiment will be started from the time when the model state was saved.

Defining an objective function

The goal of the optimization process is to find the parameter values that result in a maximum or minimum of a function called the objective function. The objective function is a mathematical expression describing a relationship between the optimization parameters, or the result of an operation (such as a simulation) that uses the optimization parameters as inputs. The optimization objective is the objective function plus the optimization criterion; the latter determines whether the goal of the optimization is to minimize or maximize the value of the objective function.

To define the objective function
  1. In the Projects view, click the optimization experiment.
  2. In the Properties view, specify the objective function in the Objective edit box. You can enter any Java expression as an objective function. Since the expression is evaluated in the context of the top-level agent, it can access variables and parameters of the top-level agent; the top-level agent is accessible here as root. If your algorithm is rather sophisticated, you can define a function in the top-level agent (e.g. Main) and place the function call in the Objective edit box, as in the sketch after this list. Example: root.totalCost().
  3. Define the optimization criterion. Choose Minimize/Maximize option to minimize/maximize your objective function.
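For example, a cost-style objective could be implemented as a function of the top-level agent. A sketch with hypothetical names (nurseCount, salaryPerNurse, waitCostPerHour, totalWaitHours are assumed fields of Main); with this in place, the Objective edit box would contain root.totalCost():

// Defined in the top-level agent Main.
double totalCost() {
    return nurseCount * salaryPerNurse + waitCostPerHour * totalWaitHours;
}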
The OptQuest Engine obtains a sample of the objective function at the end of each simulation. The engine analyzes the sample, modifies the optimization parameters according to its optimization algorithm, and starts a new simulation.

Optimization is thus an iterative process: the engine runs a simulation, obtains a sample of the objective function, adjusts the parameter values, and repeats until a stop condition is met.

Optimization Parameters

An optimization parameter (or a decision variable, in the terms of optimization) is a model parameter to be optimized. For example, the number of nurses to employ during the morning shift in an emergency room may be an optimization parameter in a model of a hospital. The OptQuest Engine searches through possible values of optimization parameters to find optimal parameters. It is possible to have more than one optimization parameter.

Only a parameter of the top level agent of the optimization experiment can be an optimization parameter. So, in order to perform optimization, you must have at least one parameter in this agent. If you need to optimize parameters of embedded agents, you should use parameter propagation.

The dimension of the search area depends on the number of optimization parameters. Each new parameter expands the search area, thus slowing down the optimization. If you have N optimization parameters, their ranges form an N-dimensional search area. Obviously, that area must be wide enough to contain the optimal point. However, the wider the ranges are, the more time is needed to find the optimum. On the other hand, suggested parameter values located near the optimal values can shorten the time it takes to find the optimal solution.

Optimization Parameter Types

During the optimization process, the parameter's value is changed in accordance with its type within an interval specified by lower and upper bounds. There are the following types of optimization parameters:

Continuous parameter can take any value from the interval. The parameter precision determines the minimal amount by which a continuous parameter can change.

Discrete parameter is represented by a finite set of decisions with an essential direction: the parameter influences the objective like a numeric parameter, but can take values from the specified set only. It begins at a lower bound and increments by a step size up to an upper bound.

Sometimes the range and step are exactly defined by the problem, but generally you will have to choose them. If you specify a step for the parameter, only the discrete points will be involved in the optimization, so it will be impossible to determine the optimal parameter value more precisely than the step allows. So, if you are not sure what the step should be, choose the Continuous rather than the Discrete parameter type.

Design parameter is represented by a finite set of decisions where there is no clear sense of direction. The value of a design parameter represents an alternative, not a quantity. It begins at a lower bound and increments by a step size up to an upper bound. The order of the values is inconsequential. Using design parameters, you can model choosing the best alternative from a catalog where the choices are not in a specific order. For example, a design parameter that can take values 0 or 1 (min=0, max=1, step=1) may represent a choice between whether the model has some element or not.
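To illustrate, a design parameter can drive a choice among alternatives in the model's setup code. A sketch where layoutOption and the layout functions are hypothetical names:

// layoutOption is an int design parameter with min = 0, max = 2, step = 1.
// Each value selects an alternative; the order of values carries no meaning.
switch (layoutOption) {
    case 0: useCompactLayout(); break;
    case 1: useLinearLayout();  break;
    case 2: useUShapedLayout(); break;
}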

Defining Optimization Parameters

Optimization parameters are defined in the Parameters section of the optimization experiment properties. The table lists all parameters of the top-level agent. By default, all of them are fixed, i.e. they are not varied by the optimization process.

To make a parameter a decision variable
  1. Select the optimization experiment in the Projects view.
  2. In the Parameters section of the Properties view, go to the row of the Parameters table containing the parameter you want to make a decision variable.
  3. Click the Type field and choose an optimization parameter type other than fixed. Depending on the type of the parameter, the list of possible values may vary: design, int, discrete for integer parameters; continuous and discrete for double, etc.
  4. Specify the range for the parameter. Enter the parameter’s lower bound in the Min field and the parameter’s upper bound in the Max field.
  5. For discrete and design parameters, specify the parameter step in the Step field.
  6. Suggest the initial value for the parameter in the Suggested field. Initially, the value is set to the parameter’s default value, but you can enter any other value here.

Optimization experiment functions

You can use the following functions to control the optimization experiment, retrieve data on its execution status, and use them as a framework for creating a custom experiment UI.

Controlling execution

void run() – Starts the experiment execution from the current state. If the model does not exist yet, the function resets the experiment, creates and starts the model.

void pause() – Pauses the experiment execution.

void step() – Performs one step of experiment execution. If the model does not exist yet, the function resets the experiment, creates and starts the model.

void stop() – Terminates the experiment execution.

void close() – This function returns immediately and performs the following actions in a separate thread: stops the experiment if it is not stopped, destroys the model, and closes the experiment window (only if the model is started in the application mode).

Experiment.State getState() – Returns the current state of the experiment: IDLE, PAUSED, RUNNING, FINISHED, ERROR, or PLEASE_WAIT.

double getRunTimeSeconds() – Returns the duration of the experiment execution in seconds, excluding pause times.

int getRunCount() – Returns the number of the current simulation run, i.e., the number of times the model was destroyed.

double getProgress() – Returns the progress of the experiment: a number between 0 and 1 corresponding to the currently completed part of the experiment (a proportion of completed iterations of the total number of iterations), or -1 if the progress cannot be calculated.

int getParallelEvaluatorsCount() – Returns the number of parallel evaluators used in this experiment. On multicore / multiprocessor systems that allow parallel execution, this number may be greater than 1.
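For example, these functions can back a custom start/pause control on the experiment UI. A minimal sketch of hypothetical button action code (the button itself is an assumption, not part of the default UI):

// Toggle between running and paused states.
if (getState() == Experiment.State.RUNNING) {
    pause();
} else {
    run(); // also creates and starts the model if it does not exist yet
}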


Objective

double getCurrentObjectiveValue() – Returns the value of the objective function for the current solution.

double getBestObjectiveValue() – Returns the value of the objective function for the best solution found so far. Note that the solution may be infeasible. To check the solution feasibility, call the isBestSolutionFeasible() function.

double getSelectedNthBestObjectiveValue() – Returns the objective value for the Nth best solution identified by the selectNthBestSolution(int) function.
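For example, you might trace how the objective evolves in the experiment's After iteration action. A sketch using only the functions listed here and in the Iterations section below:

// Hypothetical After iteration action: log the search progress.
traceln("iteration " + getCurrentIteration()
    + ": current = " + getCurrentObjectiveValue()
    + ", best = " + getBestObjectiveValue());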


Solution

boolean isBestSolutionFeasible() – Returns true if the best solution found so far satisfies all constraints and requirements; returns false otherwise.

boolean isCurrentSolutionBest() – Returns true if the current solution is the best one found so far; returns false otherwise.

boolean isCurrentSolutionFeasible() – Returns true if the current solution satisfies all constraints and requirements; returns false otherwise.

boolean isSelectedNthBestSolutionFeasible() – Returns true if the Nth best solution satisfies all constraints and requirements; returns false otherwise.

void selectNthBestSolution(int bestSolutionIndex) – Locates the Nth best solution and sets up the data for subsequent function calls that retrieve specific pieces of information (for example, the getSelectedNthBestObjectiveValue() and getSelectedNthBestParamValue(COptQuestVariable) functions). Parameter: bestSolutionIndex - the index of the solution (passing 1 locates the best solution, 2 the second best, etc.)
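As an illustration, the Nth best functions can be combined to list the top solutions once the optimization finishes. A sketch of a hypothetical After experiment action (assuming at least five solutions were evaluated):

// List the five best solutions with their feasibility status.
for (int i = 1; i <= 5; i++) {
    selectNthBestSolution(i);
    traceln(i + ": objective = " + getSelectedNthBestObjectiveValue()
        + (isSelectedNthBestSolutionFeasible() ? " (feasible)" : " (infeasible)"));
}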


Optimization parameters

double getCurrentParamValue(COptQuestVariable optimizationParameterVariable) – Returns the value of the given optimization parameter variable for the current solution.

double getBestParamValue(COptQuestVariable optimizationParameterVariable) – Returns the value of the given optimization parameter variable for the best solution found so far. Note that the solution may be infeasible. To check the solution feasibility, call the isBestSolutionFeasible() function.

double getSelectedNthBestParamValue(COptQuestVariable optimizationParameterVariable) – Returns the value of the variable for the Nth best solution identified by calling the selectNthBestSolution(int) function.


Iterations

int getCurrentIteration() – Returns the current value of the iteration counter.

int getBestIteration() – Returns the iteration that produced the best solution found so far. Note that the solution may be infeasible. To check the solution feasibility, call the isBestSolutionFeasible() function.

int getMaximumIterations() – Returns the total number of iterations.

int getNumberOfCompletedIterations() – Returns the number of completed iterations.

int getSelectedNthBestIteration() – Returns the iteration number for the Nth best solution identified by the selectNthBestSolution(int) function.


Replications

Before calling the optimization experiment functions you may need to ensure that replications are used (call the isUseReplications() function).

boolean isUseReplications() – Returns true if the experiment uses replications; returns false otherwise.

int getCurrentReplication() – Returns the number of replications run so far for the currently evaluated solution.

int getBestReplicationsNumber() – Returns the number of replications that were run to obtain the best solution found so far. Note that the solution may be infeasible. To check the solution feasibility, call the isBestSolutionFeasible() function.

int getSelectedNthBestReplicationsNumber() – Returns the number of replications for the Nth best solution identified by the selectNthBestSolution(int) function.
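For example, a hypothetical After simulation run action could log the replication count, guarding the call as recommended above:

// Only meaningful when replications are enabled.
if (isUseReplications()) {
    traceln("replications run for the current solution: " + getCurrentReplication());
}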


Accessing the model

Engine getEngine() – Returns the engine executing the model. To access the model's top-level agent (typically Main), call getEngine().getRoot().

IExperimentHost getExperimentHost() – Returns the experiment host object of the model, or a dummy object without functionality if the host object does not exist.
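For example, assuming the top-level agent type is Main and it has a parameter serviceRate (both names are assumptions for illustration):

// Access the model's top-level agent from experiment code.
Main root = (Main) getEngine().getRoot();
traceln("current serviceRate = " + root.serviceRate);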


Restoring the model state from snapshot

void setLoadRootFromSnapshot(String snapshotFileName) – Tells the experiment to load the top-level agent from the given AnyLogic snapshot file. This function is only available in AnyLogic Professional.
Parameter:
snapshotFileName - the name of the AnyLogic snapshot file, for example: "C:\\My Model.als"

boolean isLoadRootFromSnapshot() – Returns true if the experiment is configured to start the simulation from the state loaded from the snapshot file; returns false otherwise.

String getSnapshotFileName() – Returns the name of the snapshot file from which this experiment is configured to start the simulation.
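A minimal sketch of using these functions in the experiment's Initial experiment setup action (the file path is the example from the table above; AnyLogic Professional only):

setLoadRootFromSnapshot("C:\\My Model.als");
if (isLoadRootFromSnapshot()) {
    traceln("simulation will start from snapshot: " + getSnapshotFileName());
}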


Error handling

RuntimeException error(Throwable cause, String errorText) – Signals an error during the model run by throwing a RuntimeException with errorText preceded by the agent's full name.
This function never returns; it throws the runtime exception itself. The return type is defined for the cases when you would like to use the following form of call: throw error("my message");
Parameters:
cause - the cause (which will be saved for a more detailed message), may be null.
errorText - the text describing the error that will be displayed.

RuntimeException errorInModel(Throwable cause, String errorText) – Signals a model logic error during the model run by throwing a ModelException with the specified error text preceded by the agent's full name.
This function never returns; it throws the runtime exception itself. The return type is defined for the cases when you would like to use the following form of call: throw errorInModel("my message");
This function differs from error() in the way the error message is displayed: model logic errors are 'softer' than other errors; they tend to happen in models and signal the modeler that the model might need some parameter adjustments.
Examples are 'agent was unable to leave flowchart block because subsequent block was busy', 'insufficient capacity of pallet rack', etc.
Parameters:
cause - the cause (which will be saved for a more detailed message), may be null.
errorText - the text describing the error that will be displayed.

void onError(Throwable error) – This function may be overridden to perform custom handling of the errors that occur during the model execution (i.e., errors in the action code of events, dynamic events, transitions, entry/exit codes of states, formulas, etc.).
By default, this function does nothing as its definition is empty. To override it, add a function to the experiment, name it onError and define a single argument of type java.lang.Throwable for it.
Parameter:
error - the error that occurred during event execution.

void onError(Throwable error, Agent root) – Similar to the onError(Throwable error) function, except that it provides one more argument to access the top-level (root) agent of the model.
Parameters:
error - the error that occurred during event execution.
root - the top-level (root) agent of the model. Useful for experiments with multiple runs executed in parallel. May be null in some cases (e.g. on errors during top-level agent creation).
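For instance, a custom handler can be added to the experiment as described above. A sketch (the logging is illustrative):

// A function added to the experiment, named onError, with a single
// java.lang.Throwable argument: it overrides the default (empty) handler.
void onError(Throwable error) {
    traceln("run " + getRunCount() + " failed: " + error);
}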


Command-line arguments

String[] getCommandLineArguments() – Returns an array of command-line arguments passed to this experiment on model start. Never returns null: if no arguments are passed, an empty array is returned.
You can call this function in the experiment's Additional class code.
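For example, a helper in the experiment's Additional class code might read an optional integer argument (firstArgOrDefault is a hypothetical name):

// Returns the first command-line argument parsed as int, or def if none is passed.
int firstArgOrDefault(int def) {
    String[] args = getCommandLineArguments(); // never null
    return args.length > 0 ? Integer.parseInt(args[0]) : def;
}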

Constraints and requirements

AnyLogic supports constraints and requirements - additional restrictions imposed on the optimization parameters and solutions found by the optimization engine.

Constraints

A constraint is a condition defined on the optimization parameters. It defines a range for optimization parameters. Each time the optimization engine generates a new set of values for the optimization parameters, it checks whether these values satisfy the defined constraints. If they do, the engine runs the model with these values. If not, the engine generates another set of values, and so on. Thus the search space is reduced, and the optimization is performed faster.

A constraint is a well-formed arithmetic expression describing a relationship between the optimization parameters, for example:

parameterA + parameterB + 2*parameterC = 10
parameterC - parameterA*parameterB >= 300

To define a constraint

  1. Select the optimization experiment by clicking on it in the Projects view.
  2. In the Constraints section of the Properties view, click the button to the right of the table Constraints on simulation parameters (are tested before a simulation run).
  3. Specify the constraint in the Expression, Type and Bound cells of the row. In the Expression field the top-level agent is available as root.
  4. Select the check box in the Enabled column. Otherwise the constraint is eliminated from the optimization process and does not affect its results.

Requirements

A requirement is an additional restriction imposed on the solution found by the optimization engine. Requirements are checked at the end of each simulation, and if they are not met, the parameters used are rejected. Otherwise the parameters are accepted.

This means that if all requirements are met at the end of the simulation, the corresponding set of parameter values is considered feasible, and the result of the simulation is accepted by the optimization engine. Otherwise, the parameter values are considered infeasible, and the result is rejected.

A requirement can also be a restriction on a response that requires its value to fall within a specified range. A requirement may contain any variables of the agent, the model time symbol t, any arithmetic operations and method calls, such as, e.g., sin(), cos(), sqrt(), etc., or calls to your own methods.

0 <= 2*root.variable - root.statistics.max() <= 500
sqrt(variableC)>=49

To define a requirement

  1. Select the optimization experiment by clicking it in the Projects view.
  2. In the Requirements section of the Properties view, click the button to the right of the table Requirements (are tested after a simulation run to determine whether the solution is feasible).
  3. Specify the requirement in the Expression, Type and Bound cells of the row. In the Expression field the top level agent is available as root.
  4. Select the check box in the Enabled column. Otherwise the requirement is eliminated from the optimization process and does not affect its results.

Feasible and infeasible solutions

A feasible solution is one that satisfies all constraints and requirements.

The optimization engine makes finding a feasible solution its highest priority. Once it has found a feasible solution, it concentrates on finding better solutions.

The fact that a particular solution may be infeasible does not imply that the problem itself is infeasible. However, infeasible problems do exist. Here is an example:

parameterA + parameterB <= 4
parameterA + parameterB >= 5

Clearly, there is no combination that will satisfy both of these constraints.

If a model is constraint-feasible, the optimization engine will always find a feasible solution and search for the optimal solution (i.e., the best solution that satisfies all constraints).

If the optimization engine cannot find any feasible solutions, you should check the constraints for feasibility and fix the inconsistencies of the relationships modeled by the constraints.

Specifying optimization stop condition

To enable optimization, you must ensure that each simulation ends. By default, a simulation never ends; therefore the optimization engine gets no samples of the objective function. To ensure that each simulation ends, you must specify a simulation stop condition. The most common and simplest simulation stop condition is stopping at a specified model time.

In general, a simulation stop condition should be specified in such a way that the value of the objective function is significant when simulation stops. The model should be stable, transient processes should be finished, and statistics should be representative.

Note that a simulation may involve several replications. A sample of the objective function is the result of a simulation/iteration rather than a replication.

Optimization can stop under two circumstances: the maximum number of simulations is exceeded or the value of the objective function stops improving significantly. The latter is also known as automatic stop. You can use any of these conditions to stop an optimization. If more than one condition is specified, optimization stops when the first condition is satisfied.

Set up these settings of the optimization process in the General section of the optimization experiment properties.

To stop optimization after the specified number of iterations

  1. Select the optimization experiment by clicking on it in the Projects view.
  2. Go to the Properties view.
  3. Specify the number of iterations in the Number of iterations edit box. The optimization stops when this number is exceeded.

Be careful when using the Automatic stop option. In the case of an optimization jam (i.e., when the objective function changes too slowly), it is possible that the optimization will stop long before the real optimal solution is found. If you encounter this problem, suggest other parameter values, or do not use Automatic stop.

To switch optimization autostop on

  1. Select the optimization experiment by clicking on it in the Projects view.
  2. Go to the Properties view.
  3. Select the Automatic stop check box.

Please note that the number of simulations influences the optimization strategy. If the number of simulations is small, the OptQuest Engine uses an aggressive search strategy to exploit the parameter space. If the number is rather large, the OptQuest Engine uses a more conservative strategy to thoroughly explore the search space.

Default optimization UI

Just like a simulation experiment, an optimization experiment has its own presentation drawn in the graphical editor. By default, the diagram is empty. However, AnyLogic can create a default UI for the optimization experiment. We advise you to start by creating the default UI and then adjust it to your needs.

Please create the experiment's UI AFTER you have entirely defined the set of optimization parameters in the Parameters table.

To create the default optimization UI

  1. Select the optimization experiment in the Projects view.
  2. Go to the Properties view.
  3. Click the Create default UI button.
    This creates default UI for the optimization experiment as shown in the figure below.


The default UI created for optimization experiment

The default UI consists of a number of controls displaying all necessary information regarding the current status of the optimization.

The table on the left displays all the necessary information about the optimization process. The values in the Current column show the number of the current Iteration, the Parameter values used, and the corresponding Objective found so far. The values in the Best column form the best solution found up to the current time. Once the optimization has finished, this solution is considered optimal, and the best value of the objective function corresponds to it. You can copy it to the Clipboard using the Copy button below.

The chart on the right visually illustrates the optimization process. The X-axis represents simulations, and the Y-axis shows the Current Objective, Best Feasible Objective and Best Infeasible Objective found for each simulation.

The optimization experiment is run by clicking the Run button situated on the control panel of the model window.

Running optimization experiment

After you set up the optimization, you are ready to run it.

To run an optimization experiment

  1. In the Projects view, right-click (Mac OS: Ctrl click) your optimization experiment and choose Run from the context menu. Alternatively, choose Model > Run from the main menu, or click the arrow to the right of the Run toolbar button and choose the experiment you want to run from the drop-down list.
  2. This opens the model window, displaying the experiment's UI. If you have created the default UI for the experiment, start the optimization by clicking the Run button in the control panel at the bottom of the model window.

When the optimal solution has been found, you may copy it to a simulation experiment of your model. Then you will be able to simulate the model with the optimal parameters.

To copy the found optimal parameters to a simulation experiment

  1. When the optimization has finished, copy the parameter values corresponding to the best solution to the Clipboard by clicking the copy button in the model window.
  2. Paste the optimal parameter values into the simulation experiment by opening the General property page of that simulation experiment and clicking the Paste from Clipboard button at the bottom.

Optimizing stochastic models

If your model is stochastic, the results of a single model run may not be representative. The OptQuest Engine supports objective values that are based on experimentation through the General Replication Algorithm. This feature allows you to provide the OptQuest Engine with the results of multiple replications per simulation/iteration. The OptQuest Engine can run a fixed number of replications per simulation or a varying number of replications per simulation.

When running a varying number of replications, you specify the minimum and maximum number of replications to be run. The OptQuest Engine always runs the minimum number of replications for a solution, then determines whether more replications are needed. It stops evaluating a solution when the confidence level is reached for the objective or when the maximum number of replications has been run.


Optimization experiment's properties view. Replications section

To schedule a fixed number of replications
  1. In the Projects view, select the optimization experiment.
  2. In the Replications section of the Properties view, select the Use Replications check box.
  3. Choose the Fixed number of replications option.
  4. Specify the number of Replications per iteration in the edit box.
To schedule a varying number of replications
  1. In the Projects view, select the optimization experiment.
  2. In the Replications section of the Properties view, select the Use Replications check box.
  3. Choose the Varying number of replications (Stop replications after minimum replications, when confidence level is reached) option.
  4. Specify the minimum number of replications in the Minimum replications edit box.
  5. Specify the maximum number of replications in the Maximum replications edit box.
  6. Define the Confidence level to be evaluated for the objective.
  7. Specify the percent of objective for which the confidence level is determined in the Error percent field.

Increasing optimization performance

Here are some tips you may find helpful.

If the optimization engine cannot find any feasible solutions, check the following possible causes:

  1. The constraints are inconsistent. Check for conflicting constraints, such as x > 25 and x < 24. If the variables appearing in the constraints are calculated using formulas inside the model, the inconsistency may be in those formulas.
  2. The ranges of the optimization parameters conflict with the constraints.

If you find optimization performance unsatisfactory, consider the following recommendations.

Find an optimal solution faster
In general, the optimization process may be very time-consuming, especially if there are multiple parameters. If nothing from the list above helps improve performance, try using a more powerful workstation or schedule more time for optimization.