
Docker Image

The Docker image comes with ClusterControl installed and configured with all its components, so you can immediately use it to manage and monitor your existing databases.

The Docker image is a convenient way to get ClusterControl up and running quickly, and the build is 100% reproducible. Docker users can pull the image from Docker Hub and launch the tool right away to start testing ClusterControl.

This is a starting point; we plan to add deeper integration with the Docker API in future releases so that ClusterControl can transparently manage Docker containers and images, e.g., to launch, manage, and deploy database clusters using Docker images.

Build the Image

The Dockerfile is available from our GitHub repository. You can build the image manually by cloning the repository:

$ git clone https://github.com/severalnines/docker
$ cd docker/
$ docker build -t severalnines/clustercontrol .
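Once the build completes, you can verify that the image exists locally (this lists images under the repository name tagged above):

$ docker images severalnines/clustercontrol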

Running the Container

Attention

If you upgrade from ClusterControl 1.9.6 (or older) to 1.9.7 (Sept 2023), please see UPGRADING-TO-1.9.7.md. There are additional steps to stop and recreate the container to perform a proper upgrade.
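The document linked above has the authoritative steps; as a rough sketch (assuming the container name and volume layout used in the examples below), the stop-and-recreate cycle looks like this:

$ docker pull severalnines/clustercontrol
$ docker stop clustercontrol
$ docker rm clustercontrol
# Re-run the original "docker run" command with the same flags; the persistent
# volumes carry the configuration and data across the recreate.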

Please refer to the Docker Hub page for the latest instructions. Use the docker pull command to download the image:

$ docker pull severalnines/clustercontrol

To run a ClusterControl container, the simplest command would be:

$ docker run -d severalnines/clustercontrol

However, for production use, users are advised to run the container with a sticky IP address/hostname and persistent volumes so the installation survives restarts, upgrades, and rescheduling, as shown below:

# Create a Docker network so the container gets a persistent hostname and IP address
$ docker network create --subnet=192.168.10.0/24 db-cluster

# Start the container
$ docker run -d --name clustercontrol \
--network db-cluster \
--ip 192.168.10.10 \
-h clustercontrol \
-p 5000:80 \
-p 5001:443 \
-p 9443:9443 \
-p 9999:9999 \
-v /storage/clustercontrol/cmon.d:/etc/cmon.d \
-v /storage/clustercontrol/datadir:/var/lib/mysql \
-v /storage/clustercontrol/sshkey:/root/.ssh \
-v /storage/clustercontrol/cmonlib:/var/lib/cmon \
-v /storage/clustercontrol/backups:/root/backups \
-v /storage/clustercontrol/prom-data:/var/lib/prometheus \
-v /storage/clustercontrol/prom-conf:/etc/prometheus \
severalnines/clustercontrol

Once started, ClusterControl is accessible at https://{Docker_host}:5001/. You should see the welcome page, where you create a default admin user. Specify an admin username (“admin” is reserved) and a password for that user. By default, the MySQL users root and cmon use password and cmon as their passwords, respectively. You can override these values with the -e flag, as in the example below:

$ docker run -d --name clustercontrol \
--network db-cluster \
--ip 192.168.10.10 \
-h clustercontrol \
-e CMON_PASSWORD=MyCM0n22 \
-e MYSQL_ROOT_PASSWORD=SuP3RMan \
-p 5000:80 \
-p 5001:443 \
-p 9443:9443 \
-p 9999:9999 \
-v /storage/clustercontrol/cmon.d:/etc/cmon.d \
-v /storage/clustercontrol/datadir:/var/lib/mysql \
-v /storage/clustercontrol/sshkey:/root/.ssh \
-v /storage/clustercontrol/cmonlib:/var/lib/cmon \
-v /storage/clustercontrol/backups:/root/backups \
-v /storage/clustercontrol/prom-data:/var/lib/prometheus \
-v /storage/clustercontrol/prom-conf:/etc/prometheus \
severalnines/clustercontrol

The suggested port mappings are:

  • 5000 → 80 – ClusterControl GUI v2 HTTP
  • 5001 → 443 – ClusterControl GUI v2 HTTPS
  • 9443 → 9443 – ClusterControl GUI v1 HTTPS
  • 9999 → 9999 – Backup streaming port, only if ClusterControl is the database backup destination

The recommended persistent volumes are:

  • /etc/cmon.d – ClusterControl configuration files.
  • /var/lib/mysql – MySQL datadir hosting the cmon and dcps databases.
  • /root/.ssh – SSH private and public keys.
  • /var/lib/cmon – ClusterControl internal files.
  • /root/backups – Default backup directory, used only if ClusterControl is the database backup destination.
  • /var/lib/prometheus – Prometheus data directory.
  • /etc/prometheus – Prometheus configuration directory.
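Since these are bind mounts from the Docker host, it helps to create the host directories before the first run. A minimal sketch, assuming the /storage/clustercontrol prefix used in the examples above:

$ mkdir -p /storage/clustercontrol/{cmon.d,datadir,sshkey,cmonlib,backups,prom-data,prom-conf}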

Verify the container is running by using the ps command:

$ docker ps
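To narrow the output to this container, or to follow the bootstrap progress in its logs, the standard Docker commands apply:

$ docker ps --filter "name=clustercontrol"
$ docker logs -f clustercontrol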

After a moment, you should be able to access the following ClusterControl web GUIs (assuming the Docker host IP address is 192.168.11.111):

  • ClusterControl GUI v2 HTTP: http://192.168.11.111:5000/
  • ClusterControl GUI v2 HTTPS: https://192.168.11.111:5001/ (recommended)
  • ClusterControl GUI v1 HTTPS: https://192.168.11.111:9443/clustercontrol
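As a quick reachability check from the shell (replace the IP address with your Docker host's), you can request the GUI v2 HTTPS endpoint with curl; the -k flag skips certificate verification, which is needed if the container serves a self-signed certificate:

$ curl -k -I https://192.168.11.111:5001/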

Note that starting from ClusterControl 1.9.7, ClusterControl GUI v2 is the default graphical user interface (GUI) for ClusterControl. ClusterControl GUI v1 has reached the end of its development cycle and is considered feature-frozen. All new development happens on ClusterControl GUI v2.

For more examples of deployments with Docker images, please refer to ClusterControl on Docker and the Docker image GitHub page. For more info on the configuration options, please refer to ClusterControl’s Docker Hub page.

Environment Variables

The following environment variables are supported:

CMON_PASSWORD={string}
  • MySQL password for the ‘cmon’ user. Defaults to ‘cmon’. Using a Docker secret is recommended.
  • Example: CMON_PASSWORD=cmonP4s5
MYSQL_ROOT_PASSWORD={string}
  • MySQL root password for the ClusterControl container. Defaults to ‘password’. Using a Docker secret is recommended.
  • Example: MYSQL_ROOT_PASSWORD=MyPassW0rd
CMON_STOP_TIMEOUT={integer}
  • How long to wait (in seconds) for CMON to stop gracefully (SIGTERM) during container bootstrapping. Defaults to 30.
  • If the timeout is exceeded, CMON is stopped with SIGKILL.
  • Example: CMON_STOP_TIMEOUT=30
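To keep passwords out of the shell history, one option is Docker's standard --env-file flag instead of individual -e flags. A sketch, assuming a file named clustercontrol.env with restrictive permissions:

# clustercontrol.env (illustrative values)
CMON_PASSWORD=MyCM0n22
MYSQL_ROOT_PASSWORD=SuP3RMan

# Run with the env file, together with the network, port, and volume flags shown earlier
$ docker run -d --name clustercontrol --env-file clustercontrol.env severalnines/clustercontrol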

Service Management

ClusterControl requires several processes to be running:

  • mariadbd – MariaDB 10.5 server hosting the ClusterControl database.
  • httpd – Apache 2.4 web server.
  • php-fpm – PHP 7.4 FastCGI process manager for ClusterControl GUI v1.
  • cmon – ClusterControl backend daemon, the brain of ClusterControl; depends on mariadbd.
  • cmon-ssh – ClusterControl web-based SSH daemon; depends on cmon and httpd.
  • cmon-events – ClusterControl notifications daemon; depends on cmon and httpd.
  • cmon-cloud – ClusterControl cloud integration daemon; depends on cmon and httpd.

These processes are controlled by Supervisord, a process control system. To manage a process, use the supervisorctl client, as shown in the following example:

[root@docker-host]$ docker exec -it clustercontrol /bin/bash
$ supervisorctl
cmon                             RUNNING   pid 504, uptime 0:11:37
cmon-cloud                       RUNNING   pid 505, uptime 0:11:37
cmon-events                      RUNNING   pid 506, uptime 0:11:37
cmon-ssh                         RUNNING   pid 507, uptime 0:11:37
httpd                            RUNNING   pid 509, uptime 0:11:37
mariadbd                         RUNNING   pid 503, uptime 0:11:37
php-fpm                          RUNNING   pid 508, uptime 0:11:37
supervisor> restart cmon
cmon: stopped
cmon: started
supervisor> status cmon
cmon                             RUNNING   pid 504, uptime 0:00:21
supervisor>

In some cases, you might need to restart the corresponding services after a manual upgrade or configuration tuning. Details on the start commands can be found inside conf/supervisord.conf.
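For a one-off restart you do not need the interactive session; supervisorctl also accepts the action as command-line arguments, so the same commands can be issued directly from the Docker host:

$ docker exec clustercontrol supervisorctl restart cmon
$ docker exec clustercontrol supervisorctl status cmon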

Disclaimer

Although Severalnines offers ClusterControl as a Docker image, it is not intended for production use. ClusterControl was never designed to run in a container environment, due to its internal logic and system design. We maintain the Docker image on a best-effort basis, and it is not part of the product development roadmap.
