Deprecation Notice: ClusterControl Docker Image
Effective immediately, the standalone ClusterControl Docker image is deprecated and will no longer receive updates or official support. This change is part of our ongoing effort to streamline deployment workflows and provide a more robust, scalable solution for managing your database clusters.
What’s Changing
- Docker Image Deprecation: We will no longer publish new versions or provide support for the existing ClusterControl Docker image.
- Replacement: All new deployments should transition to the ClusterControl Helm Chart, which offers a more integrated, reliable, and Kubernetes-native method for installing and managing ClusterControl.
Why the Change?
- Enhanced Features: The ClusterControl Helm Chart includes improved configuration, scalability, and lifecycle management features that are not available with a standalone Docker container.
- Kubernetes-Native: By focusing on a Helm-based deployment, we can provide a streamlined approach that fully aligns with modern, containerized environments and best practices.
- Service Decoupling: ClusterControl can run in a containerized environment alongside auxiliary containers such as cmon-sd, Prometheus/VictoriaMetrics, NGINX ingress, and InnoDB Cluster (using MySQL Operator).
Timeline
- Immediate Deprecation: No further releases or maintenance updates will be provided for the Docker image.
- Support Discontinuation: Any existing issues or questions related to the Docker image will be addressed on a best-effort basis until the end of 2025, if applicable. After this date, the image is considered fully unsupported.
Next Steps
- Plan Your Migration: Review our ClusterControl Helm Chart Documentation to understand requirements and migration steps.
- Backup & Transition: Before decommissioning the old container, ensure your data and configurations are safely backed up.
- Deploy via Helm: Use Helm to install and manage the latest version of ClusterControl, leveraging new features and improvements.
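The migration steps above can be sketched as a short shell session. This is only an illustration: the chart repository URL and chart name below are placeholders, and the backup path assumes the /storage/clustercontrol layout used in the Docker run examples later in this document. Consult the ClusterControl Helm Chart documentation for the actual repository and values.

```shell
# Sketch of a migration, assuming a Kubernetes cluster and Helm 3 are available.
# NOTE: the repository URL and chart name are placeholders -- check the
# ClusterControl Helm Chart documentation for the real ones.

# 1. Back up the old container's data and configuration volumes.
tar -czf clustercontrol-backup.tar.gz /storage/clustercontrol

# 2. Add the chart repository and install ClusterControl via Helm.
helm repo add severalnines https://example.com/helm-charts   # placeholder URL
helm repo update
helm install clustercontrol severalnines/clustercontrol \
  --namespace clustercontrol --create-namespace
```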
If you have any questions or need assistance migrating to the Helm chart, please reach out to our support team or consult the detailed installation guides available in our documentation. We appreciate your cooperation and look forward to continuing to improve your ClusterControl experience with this new, more powerful deployment method.
This image has been deprecated. See Deprecation Notice: ClusterControl Docker Image.
The Docker image comes with ClusterControl installed and configured with all its components, so you can immediately use it to manage and monitor your existing databases.
The Docker image is convenient because it is quick to get up and running and is 100% reproducible. Docker users can start testing ClusterControl by pulling the image from Docker Hub and launching the tool.
It is a start and we plan to add better integration with the Docker API in future releases to transparently manage Docker containers/images within ClusterControl, e.g., to launch/manage and deploy database clusters using Docker images.
Build the Image
The Dockerfile is available in our GitHub repository. You can build the image manually by cloning the repository:
$ git clone https://github.com/severalnines/docker
$ cd docker/
$ docker build -t severalnines/clustercontrol .
Running the Container
If you upgrade from ClusterControl 1.9.6 (or older) to 1.9.7 (Sept 2023), please see UPGRADING-TO-1.9.7.md. There are additional steps to stop and recreate the container to perform a proper upgrade.
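The stop-and-recreate procedure mentioned above can be sketched as follows. This is only an outline, assuming the container name, network, and volume paths from the production example later in this document; UPGRADING-TO-1.9.7.md remains the authoritative procedure.

```shell
# Sketch: recreate the container on a new image while keeping persistent volumes.
# Follow UPGRADING-TO-1.9.7.md for the authoritative upgrade steps.
docker pull severalnines/clustercontrol   # fetch the new image
docker stop clustercontrol
docker rm clustercontrol                  # safe: data lives on the host volumes
docker run -d --name clustercontrol \
  --network db-cluster --ip 192.168.10.10 -h clustercontrol \
  -p 5000:80 -p 5001:443 -p 9443:9443 -p 9999:9999 \
  -v /storage/clustercontrol/cmon.d:/etc/cmon.d \
  -v /storage/clustercontrol/datadir:/var/lib/mysql \
  -v /storage/clustercontrol/sshkey:/root/.ssh \
  -v /storage/clustercontrol/cmonlib:/var/lib/cmon \
  -v /storage/clustercontrol/backups:/root/backups \
  -v /storage/clustercontrol/prom-data:/var/lib/prometheus \
  -v /storage/clustercontrol/prom-conf:/etc/prometheus \
  severalnines/clustercontrol
```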
Please refer to the Docker Hub page for the latest instructions. Use the docker pull command to download the image:
$ docker pull severalnines/clustercontrol
To run a ClusterControl container, the simplest command would be:
$ docker run -d severalnines/clustercontrol
However, for production use, users are advised to run with a sticky IP address/hostname and persistent volumes so the container survives restarts, upgrades, and rescheduling, as shown below:
# Create a Docker network for persistent hostname & ip address
$ docker network create --subnet=192.168.10.0/24 db-cluster
# Start the container
$ docker run -d --name clustercontrol \
--network db-cluster \
--ip 192.168.10.10 \
-h clustercontrol \
-p 5000:80 \
-p 5001:443 \
-p 9443:9443 \
-p 9999:9999 \
-v /storage/clustercontrol/cmon.d:/etc/cmon.d \
-v /storage/clustercontrol/datadir:/var/lib/mysql \
-v /storage/clustercontrol/sshkey:/root/.ssh \
-v /storage/clustercontrol/cmonlib:/var/lib/cmon \
-v /storage/clustercontrol/backups:/root/backups \
-v /storage/clustercontrol/prom-data:/var/lib/prometheus \
-v /storage/clustercontrol/prom-conf:/etc/prometheus \
severalnines/clustercontrol
Once started, ClusterControl is accessible at https://{Docker_host}:5001/. You should see the welcome page to create a default admin user. Specify an admin username (“admin” is reserved) and a password for that user. By default, the MySQL root user's password is "password" and the cmon user's password is "cmon". You can override these values with the -e flag, as in the example below:
$ docker run -d --name clustercontrol \
--network db-cluster \
--ip 192.168.10.10 \
-h clustercontrol \
-e CMON_PASSWORD=MyCM0n22 \
-e MYSQL_ROOT_PASSWORD=SuP3RMan \
-p 5000:80 \
-p 5001:443 \
-p 9443:9443 \
-p 9999:9999 \
-v /storage/clustercontrol/cmon.d:/etc/cmon.d \
-v /storage/clustercontrol/datadir:/var/lib/mysql \
-v /storage/clustercontrol/sshkey:/root/.ssh \
-v /storage/clustercontrol/cmonlib:/var/lib/cmon \
-v /storage/clustercontrol/backups:/root/backups \
-v /storage/clustercontrol/prom-data:/var/lib/prometheus \
-v /storage/clustercontrol/prom-conf:/etc/prometheus \
severalnines/clustercontrol
The suggested port mappings are:
- 5000 → 80 – ClusterControl GUI v2 HTTP
- 5001 → 443 – ClusterControl GUI v2 HTTPS
- 9443 → 9443 – ClusterControl GUI v1 HTTPS
- 9999 → 9999 – Backup streaming port, only if ClusterControl is the database backup destination
The recommended persistent volumes are:
- /etc/cmon.d – ClusterControl configuration files.
- /var/lib/mysql – MySQL datadir hosting the cmon and dcps databases.
- /root/.ssh – SSH private and public keys.
- /var/lib/cmon – ClusterControl internal files.
- /root/backups – Default backup directory, only if ClusterControl is the database backup destination.
- /var/lib/prometheus – Prometheus data directory.
- /etc/prometheus – Prometheus configuration directory.
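Before the first run, you may want to pre-create the host directories backing these volumes so ownership and permissions are predictable. A minimal sketch; the CC_BASE variable is a convenience introduced here, and the run examples in this document use /storage/clustercontrol, so set CC_BASE=/storage/clustercontrol to match them.

```shell
# Sketch: pre-create the host directories mapped as persistent volumes.
# CC_BASE is introduced here for illustration; the run examples in this
# document use /storage/clustercontrol as the base path.
CC_BASE="${CC_BASE:-$HOME/clustercontrol-storage}"
mkdir -p "$CC_BASE/cmon.d" \
         "$CC_BASE/datadir" \
         "$CC_BASE/sshkey" \
         "$CC_BASE/cmonlib" \
         "$CC_BASE/backups" \
         "$CC_BASE/prom-data" \
         "$CC_BASE/prom-conf"
# SSH key material is sensitive; restrict permissions on that directory.
chmod 700 "$CC_BASE/sshkey"
```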
Verify that the container is running with the docker ps command:
$ docker ps
After a moment, you should be able to access the following ClusterControl web GUIs (assuming the Docker host IP address is 192.168.11.111):
- ClusterControl GUI v2 HTTP: http://192.168.11.111:5000/
- ClusterControl GUI v2 HTTPS: https://192.168.11.111:5001/ (recommended)
- ClusterControl GUI v1 HTTPS: https://192.168.11.111:9443/clustercontrol
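A quick way to confirm the GUIs respond is to probe the mapped ports from a machine that can reach the Docker host. A sketch, assuming the example IP address above; adjust it to your environment.

```shell
# Probe each mapped GUI port and print the HTTP status code.
# -k skips certificate verification, since the container ships with a
# self-signed certificate by default.
curl -s  -o /dev/null -w "v2 HTTP  %{http_code}\n" http://192.168.11.111:5000/
curl -ks -o /dev/null -w "v2 HTTPS %{http_code}\n" https://192.168.11.111:5001/
curl -ks -o /dev/null -w "v1 HTTPS %{http_code}\n" https://192.168.11.111:9443/clustercontrol
```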
Note that starting from ClusterControl 1.9.7, ClusterControl GUI v2 is the default frontend graphical user interface (GUI) for ClusterControl. ClusterControl GUI v1 has reached the end of the development cycle and is considered a feature-freeze product. All new developments will be happening on ClusterControl GUI v2.
For more examples of deployments with Docker images, please refer to ClusterControl on Docker and the Docker image GitHub page. For more info on the configuration options, please refer to ClusterControl’s Docker Hub page.
Environment Variables
Variable | Description and Example
---|---
CMON_PASSWORD={string} | Password of the cmon user, which ClusterControl uses to connect to its database. Default: cmon. Example: CMON_PASSWORD=MyCM0n22
MYSQL_ROOT_PASSWORD={string} | Password of the MySQL root user inside the container. Default: password. Example: MYSQL_ROOT_PASSWORD=SuP3RMan
CMON_STOP_TIMEOUT={integer} | Number of seconds to wait for the cmon process to terminate gracefully when the container is stopped. Example: CMON_STOP_TIMEOUT=30
Service Management
ClusterControl requires several processes to be running:
- mariadbd – ClusterControl database runs on MariaDB 10.5.
- httpd – Web server running on Apache 2.4.
- php-fpm – PHP 7.4 FastCGI process manager for ClusterControl GUI v1.
- cmon – ClusterControl backend daemon, the brain of ClusterControl, which depends on mariadbd.
- cmon-ssh – ClusterControl web-based SSH daemon, which depends on cmon and httpd.
- cmon-events – ClusterControl notifications daemon, which depends on cmon and httpd.
- cmon-cloud – ClusterControl cloud integration daemon, which depends on cmon and httpd.
These processes are controlled by Supervisord, a process control system. To manage a process, use the supervisorctl client as shown in the following example:
[root@docker-host]$ docker exec -it clustercontrol /bin/bash
$ supervisorctl
cmon RUNNING pid 504, uptime 0:11:37
cmon-cloud RUNNING pid 505, uptime 0:11:37
cmon-events RUNNING pid 506, uptime 0:11:37
cmon-ssh RUNNING pid 507, uptime 0:11:37
httpd RUNNING pid 509, uptime 0:11:37
mariadbd RUNNING pid 503, uptime 0:11:37
php-fpm RUNNING pid 508, uptime 0:11:37
supervisor> restart cmon
cmon: stopped
cmon: started
supervisor> status cmon
cmon RUNNING pid 504, uptime 0:00:21
supervisor>
In some cases, you might need to restart the corresponding services after a manual upgrade or configuration tuning. Details on the start commands can be found inside conf/supervisord.conf.
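Such restarts can also be done non-interactively from the Docker host, without entering a supervisorctl session. A sketch, assuming the container name from the run examples above:

```shell
# Restart a single service (here: cmon) non-interactively...
docker exec clustercontrol supervisorctl restart cmon
# ...or restart everything after a larger configuration change.
docker exec clustercontrol supervisorctl restart all
# Confirm all processes are back in the RUNNING state.
docker exec clustercontrol supervisorctl status
```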
Disclaimer
Although Severalnines offers ClusterControl as a Docker image, it is not intended for production use. ClusterControl was never designed to run in a container environment, due to its internal logic and system design. We maintain the Docker image on a best-effort basis, and it is not part of the product development roadmap and pipeline.