This section covers various issues you may encounter when administering Private Cloud and describes how to handle them.
Issues can take place:
To identify the reason for an issue you have encountered, consult Docker logs, logs of individual components, and installer logs.
To collect Docker logs from an individual container running a specific Private Cloud component, you can execute the following command:
docker logs %component name%
If the command output is too extensive, try trimming it to the last 100 lines:
docker logs %component name% --tail 100
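Besides --tail, the docker logs command supports a few other options that help narrow down the output; for example, --since limits entries by time and -f follows the log in real time:

```shell
# Show only entries from the last hour
docker logs %component name% --since 1h
# Follow the log in real time (Ctrl+C to stop)
docker logs %component name% -f
```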
You can also find execution logs of Private Cloud service components in the web UI, on the administrator panel.
Some issues may be caused by the front end of the Private Cloud instance; in this case, the browser console output can be useful.
If a problem occurs with your Private Cloud instance and none of the options described below helps, please contact our support team at email@example.com to find a solution.
The log of the controller service component may report this issue for two primary reasons:
To identify the problem:
Make sure the following ports are open: 22, 80, 5000, 5432, 5672, 9000, 9042, 9050, 9080, 9101, 9102, 9103, 9200, 9201, 9202. To do that, run any of the following commands:
sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo nmap -sTU -O <%Private Cloud IP address%>
For example, the lsof command returns a table consisting of lines that look something like this:
sshd 9406 root 4u IPv6 10473818 0t0 TCP *:22 (LISTEN)
In this line, sshd is the application name, 22 is the port, and 9406 is the process number. LISTEN means the port is open and accepts new connections.
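If you need to pull the individual fields out of such a line programmatically, plain text processing is enough. This is a sketch using awk on the sample line shown above:

```shell
# Parse an lsof LISTEN line into application name, PID, and port.
line='sshd 9406 root 4u IPv6 10473818 0t0 TCP *:22 (LISTEN)'
app=$(echo "$line" | awk '{print $1}')
pid=$(echo "$line" | awk '{print $2}')
port=$(echo "$line" | awk '{print $9}' | sed 's/.*://')
echo "$app listens on port $port (PID $pid)"
```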
iptables -L -v -n
If the record containing the Private Cloud IP address includes DROP as its target, the SSH connection is dropped and the controller is unable to deploy the Cloud properly.
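As a convenience, all ports required by Private Cloud can be probed in one loop. This is a sketch that assumes the nc (netcat) utility is installed; the host address is a placeholder:

```shell
#!/bin/sh
# Probe every port required by Private Cloud.
# HOST is a placeholder -- replace it with the Private Cloud IP address.
HOST="%Private Cloud IP address%"
for PORT in 22 80 5000 5432 5672 9000 9042 9050 9080 9101 9102 9103 9200 9201 9202; do
  if nc -z -w 2 "$HOST" "$PORT" 2>/dev/null; then
    echo "port $PORT: open"
  else
    echo "port $PORT: closed or filtered"
  fi
done
```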
Possible reasons for this error are an insufficient number of CPU cores on your host machine or a lack of available memory.
To avoid the issue, consider increasing the number of CPUs and the amount of memory in line with the Private Cloud system requirements.
To identify the reason for this issue, collect logs from the Docker container running the executor service component by executing the following command:
docker logs executor
The command output should contain the following message: evaluation period has expired.
Consider requesting and re-entering the license key via the license server, or downloading a new evaluation build and installing it from scratch.
When you open your Private Cloud instance in a web browser, you may see the Evaluation period has expired message:
If you are sure that you have a valid license, this message may signal an issue.
If these methods don’t help, make sure the license server can be reached from the Cloud container. To do that, use the netcat (nc) utility or telnet. For example, execute the following commands:
docker exec -it controller /bin/sh
nc %the IP address of the server% 8443
If the command returns P, this means the license server is available.
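The two steps above can also be combined into a single non-interactive call; this sketch assumes the nc binary inside the controller container supports the -z (scan only) flag:

```shell
# Check license server reachability from inside the controller container
# without opening an interactive shell first.
docker exec controller nc -z -v %the IP address of the server% 8443
```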
If none of these helps, contact our support team at firstname.lastname@example.org.
This issue may occur on RedHat machines running Private Cloud. It leads to the following behavior:
Additional errors can be found in Docker logs, available by executing the following command:
journalctl -u docker
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
docker stop $(docker ps -q)
systemctl stop docker
systemctl start docker
docker start controller
Other Private Cloud containers will start automatically.
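Note that settings applied with sysctl -w do not survive a reboot. To make the IPv6 settings above persistent, they can be appended to /etc/sysctl.conf; this is a sketch using the standard sysctl conventions:

```shell
# Persist the IPv6 settings across reboots
cat <<'EOF' | sudo tee -a /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
# Reload the settings immediately
sudo sysctl -p
```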
This issue can be identified in the controller container logs, which you can access by executing docker logs controller. If an SSH authentication failure is the cause, you can start the frontend component manually:
docker start frontend
Sometimes the controller component is unable to start other service components after a reboot, returning errors like Access Denied or Authentication Required, although starting those services manually with the docker start command works.
The reason for this is most likely a problem with the SSH connection and authentication. To avoid this, make sure that:
When the host machine connects to the web via a proxy server, it may handle connections to the Docker registry and the Private Cloud local registry incorrectly.
To avoid this, configure Docker to use an HTTP or HTTPS proxy when connecting to the registry-1.docker.io URL, while connecting to the Private Cloud registry directly, bypassing the proxy. For that purpose, use the NO_PROXY flag:
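On systemd-based hosts, proxy settings for the Docker daemon are typically supplied through a drop-in unit file. The following is a sketch; the proxy URL is an assumption, and the registry address is a placeholder to be replaced with your own values:

```shell
# Create a systemd drop-in with proxy settings for the Docker daemon.
# The proxy URL below is an assumption -- substitute your actual proxy.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,%Private Cloud registry address%"
EOF
# Apply the new daemon configuration
sudo systemctl daemon-reload
sudo systemctl restart docker
```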
To make sure the issue is not caused by incorrectly configured addresses, check the /alc/controller/conf/public.json and /opt/anylogic-team-license-server/conf/serv.properties files in the Private Cloud installation directory, as well as the information in the license server web interface: all of these should specify the same address of the Private Cloud instance.
To properly re-configure the addresses and restart the services:
sudo service anylogic-tls stop
sudo service anylogic-tls start
docker stop controller
docker stop rest
docker start controller
To identify the issue, try executing the following command:
telnet %the address of your Private Cloud instance% 9050
If this returns Connection closed by foreign host, make sure that port 9050 is open and available both on the machine that hosts Private Cloud and on the machine that runs AnyLogic.
Additionally, check whether the network contains routers with an embedded firewall, a DPI system, or other traffic-filtering systems.
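On the hosting machine, you can verify locally that something is listening on port 9050 and, where applicable, open the port in the host firewall. This sketch assumes a firewalld-based setup, which may not match your environment:

```shell
# Is any process listening on port 9050 on the host itself?
sudo ss -tlnp | grep 9050
# If firewalld is in use (assumption), open the port and reload the rules
sudo firewall-cmd --add-port=9050/tcp --permanent
sudo firewall-cmd --reload
```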
This issue occurs after updates are performed on an existing Private Cloud instance. A migration issue in the controller component can force the rest component to restart very frequently. To identify it, check the log of the container that runs controller with the following command:
docker logs controller
Its output should report something like this:
2021-04-16 13:11:56:453 ERROR CONTROLLER - 2021-04-16T13:11:56.453 - migration REST: com.anylogic.cloud.migration.MigrationException: liquibase.exception.LockException: Could not acquire change log lock. Currently locked by 9ac02ab910d0 (172.17.0.8) since 4/8/21 11:02 PM
Additionally, the rest container log (run docker logs rest to open it) will report another error:
Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
[main] ERROR org.springframework.boot.SpringApplication - Application startup failed
The most likely reason for this issue is a critical malfunction of some kind (for example, a power outage) that occurred during the execution of the update script.
To clean up affected files and solve the issue, do the following:
docker exec -ti -u postgres postgres psql anylogic_cloud -c 'truncate databasechangeloglock;'
docker restart controller
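To confirm that the stale lock was actually removed, you can re-query the table using the same psql invocation as in the steps above:

```shell
# The query should return 0 rows if the stale lock was removed
docker exec -ti -u postgres postgres psql anylogic_cloud \
  -c 'select * from databasechangeloglock;'
```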