Manual Installation
If you want more control over the installation process, you may perform a manual installation. ClusterControl requires a number of packages to be installed and configured, as described in the following list:

- `clustercontrol-mcc` – ClusterControl graphical user interface (GUI).
- `clustercontrol-controller` – ClusterControl CMON controller.
- `clustercontrol-notifications` – ClusterControl notification module, to forward alarms and notifications to third-party tools like PagerDuty and Slack.
- `clustercontrol-ssh` – ClusterControl web-based SSH module, to access the host via SSH directly from the ClusterControl GUI.
- `clustercontrol-cloud` – ClusterControl cloud module, to integrate with your cloud providers from the ClusterControl GUI.
- `clustercontrol-proxy` – ClusterControl controller proxying service for the ClusterControl web user interface (GUI).
- `clustercontrol-kuber-proxy` – ClusterControl module for integration with the Kubernetes environment.
- `clustercontrol-clud` – ClusterControl cloud file manager module, to upload and download backups from cloud storage. It requires `clustercontrol-cloud`.
- `s9s-tools` – ClusterControl command-line interface (CLI).
Note
Installing and uninstalling ClusterControl should not bring any downtime to the managed database cluster.
Requirements
Make sure the following is ready prior to this installation:
- The ClusterControl host must be running on the supported operating system. See Operating System.
- Verify that sudo is working properly if you are using a non-root user. See Operating System User.
- You must have an internet connection on the ClusterControl node during the installation process. Otherwise, see Offline Installation.
Installation Steps
The steps described in the following sections should be performed on the ClusterControl node unless specified otherwise.
Red Hat/CentOS/Rocky Linux/AlmaLinux
1. Set up the ClusterControl repository.
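    A minimal sketch of the repository definition, assuming the public Severalnines YUM repository layout; verify the URL and key against the official repository instructions:

    ```bash
    # Import the repository signing key and create the repo file.
    # The baseurl and gpgkey below are assumptions; confirm them first.
    sudo rpm --import https://repo.severalnines.com/severalnines-repos.asc
    sudo tee /etc/yum.repos.d/s9s-repo.repo > /dev/null <<'EOF'
    [s9s-repo]
    name = Severalnines Repository
    baseurl = https://repo.severalnines.com/rpm/os/x86_64
    enabled = 1
    gpgkey = https://repo.severalnines.com/severalnines-repos.asc
    gpgcheck = 1
    EOF
    ```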
2. Set up the ClusterControl CLI repository.
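    The CLI ships from a separate s9s-tools repository whose path depends on your OS release, so the sketch below uses a placeholder URL; take the real one from the s9s-tools installation instructions:

    ```bash
    # Placeholder baseurl; substitute the s9s-tools repository URL
    # for your OS release from the official instructions.
    sudo tee /etc/yum.repos.d/s9s-tools.repo > /dev/null <<'EOF'
    [s9s-tools]
    name = s9s-tools
    baseurl = <s9s-tools repository URL for your OS release>
    enabled = 1
    gpgcheck = 0
    EOF
    ```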
3. Disable SELinux and open the required ports (or stop the firewall):
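    A sketch of this step; the port list in the comment is an assumption based on the services described on this page, so verify it against the firewall requirements for your version:

    ```bash
    # Switch SELinux to permissive mode now and across reboots.
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

    # Either stop the firewall entirely...
    sudo systemctl stop firewalld
    # ...or keep it running and open the required ports instead
    # (assumed list: 443 for the GUI, 9500/9501 for the cmon RPC,
    # plus the ports of the ssh/events/cloud modules you install).
    ```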
4. Install required packages via the package manager:
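    A sketch, assuming MariaDB is used as the CMON database backend (as the MariaDB step below implies); package names other than the MariaDB server are assumptions:

    ```bash
    sudo dnf install -y mariadb-server mariadb curl wget openssl
    ```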
5. Install EPEL packages:
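    On RHEL-compatible distributions this is typically:

    ```bash
    # EPEL provides extra dependencies not shipped in the base repositories.
    sudo dnf install -y epel-release
    ```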
6. Install ClusterControl packages:
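    Using the package names from the list at the top of this page (install only the optional modules you need):

    ```bash
    sudo dnf install -y clustercontrol-mcc clustercontrol-controller \
        clustercontrol-notifications clustercontrol-ssh clustercontrol-cloud \
        clustercontrol-clud clustercontrol-proxy s9s-tools
    # Add clustercontrol-kuber-proxy if you need the Kubernetes integration.
    ```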
7. Start the MariaDB server, enable it on boot, and set the database root password:
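    A sketch of this step; `{rootpassword}` is a placeholder to substitute:

    ```bash
    sudo systemctl enable --now mariadb
    # Set the database root password (or run mysql_secure_installation
    # interactively instead).
    sudo mysqladmin -uroot password '{rootpassword}'
    ```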
8. Create a database user called `cmon`, and grant proper privileges:

    ```bash
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"localhost" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"127.0.0.1" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"{controller_ip_address}" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'FLUSH PRIVILEGES'
    ```

    Replace `{cmonpassword}` with the respective value and `{controller_ip_address}` with a valid FQDN or IP address of the ClusterControl node.
9. Generate a ClusterControl key to be used by the `--rpc-token` option further below:
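    One common way to generate a 32-character hexadecimal token like the one used in the example below:

    ```bash
    # 16 random bytes rendered as 32 hex characters.
    openssl rand -hex 16
    ```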
10. Initialize the ClusterControl Controller service, `cmon`, by running the following command:

    ```bash
    cmon --init \
        --mysql-hostname="127.0.0.1" \
        --mysql-port="3306" \
        --mysql-username="cmon" \
        --mysql-password="{cmonpassword}" \
        --mysql-database="cmon" \
        --hostname="{ClusterControl Primary IP Address}" \
        --rpc-token="{ClusterControl API key as generated above}" \
        --controller-id="clustercontrol"
    ```

    Example

    ```
    $ cmon --init \
      --mysql-hostname="127.0.0.1" \
      --mysql-port="3306" \
      --mysql-username="cmon" \
      --mysql-password="xxxx" \
      --mysql-database="cmon" \
      --hostname="10.10.10.13" \
      --rpc-token="dcd17b14e88b47f8ac7f25cd85508fb0" \
      --controller-id="clustercontrol"
    The --init option received, initializing the controller.
    Verifying the Cmon Database...
    Cmon Database connect success, the database is not yet created, ok.
    Checking the Cmon Database schema...
    Cmon Database does not exist, will be created now.
    Applying modifications from 'cmon_db.sql,cmon_db_mods_hotfix.sql,cmon_data.sql'.
    Verifying connection...
    Initializing the user manager.
    User manager is creating system users.
    Checking that the system users exist.
    Creating system groups: admins
    Creating system groups: users
    Creating system groups: nobody
    Creating system user.
    Creating nobody user.
    Creating admin user.
    ```

    Attention

    The value of `hostname` must be either a valid FQDN or the IP address of the ClusterControl node. If the host has multiple IP addresses, pick the primary IP address of the host. The `cmon` user password is the one set in the previous step when creating the user.
11. The ClusterControl event and cloud modules require their service definitions inside `/etc/default/cmon`. Create the file and add the following lines:
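    A sketch of the file contents; the two endpoints are assumptions matching the default ports of the notification and cloud services, so verify them for your version:

    ```bash
    # /etc/default/cmon -- assumed defaults; verify before use.
    EVENTS_CLIENT="http://127.0.0.1:9510"
    CLOUD_SERVICE="http://127.0.0.1:9518"
    ```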
12. Initialize the ClusterControl web application to be started on port 443:

    Example

    ```
    $ ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc
    ClusterControl Manager - admin CLI v2.2
    Controller 127.0.0.1:9501 registered successfully
    Changing frontend_path from /app to /var/www/html/clustercontrol-mcc
    File /var/www/html/clustercontrol-mcc/config.js updated successfully
    Configuration /usr/share/ccmgr/ccmgr.yaml updated successfully
    Please restart 'cmon-proxy' service to apply changes
    ```

    Tip

    If you want to use your own SSL certificate, update the `tls_key` and `tls_cert` values inside `/usr/share/ccmgr/ccmgr.yaml` accordingly.
13. Enable the ClusterControl daemons on boot and start them:
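    A sketch, assuming the systemd unit names follow the package names; adjust the list to the modules you actually installed:

    ```bash
    # Unit names are assumptions based on the installed packages.
    for svc in cmon cmon-ssh cmon-events cmon-cloud cmon-proxy; do
        sudo systemctl enable --now "$svc"
    done
    ```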
14. Create a user called `ccsetup`, for new registration purposes (if this user exists, the ClusterControl GUI first run will default to the registration page instead):
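    The troubleshooting section at the bottom of this page uses the following CLI invocation for this step:

    ```bash
    s9s user --create --new-password=admin --group=admins \
        --email-address="[email protected]" \
        --controller="https://localhost:9501" ccsetup
    ```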
15. Open the ClusterControl GUI at `https://<ClusterControl_host>/` and create the default admin user by specifying a username (username "admin" is reserved) and password on the welcome page.
16. Generate an SSH key to be used by ClusterControl when connecting to all managed hosts. In this example, we are using the root user to connect to the managed hosts. To generate an SSH key for the root user, do:

    Note

    If you are running as a sudoer, the default SSH key will be located under `/home/$USER/.ssh/id_rsa`. See Operating System User.
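    A typical invocation (a sketch; the key type and path are conventional defaults):

    ```bash
    # Generate an RSA key pair for root without a passphrase.
    sudo ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
    ```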
17. Before creating or importing a database server/cluster into ClusterControl, set up passwordless SSH from the ClusterControl host to the database host(s). Use the following command to copy the SSH key to the target hosts, replacing `<SSH user>` and `<IP address of the target node>` with appropriate values; repeat the command for all target hosts:
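    A typical form of the command, using the placeholders from the text above:

    ```bash
    # Repeat for every target host.
    ssh-copy-id -i ~/.ssh/id_rsa.pub "<SSH user>@<IP address of the target node>"
    ```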
The installation is complete, and you can start importing an existing database cluster or deploying a new one. See the User Guide to start using ClusterControl.
Debian/Ubuntu
The following steps should be performed on the ClusterControl node unless specified otherwise. Omit sudo if you are installing as the root user.
1. Set up the ClusterControl repository.
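    A minimal sketch, assuming the public Severalnines APT repository; verify the source line, key, and suite against the official repository instructions:

    ```bash
    # The repository line below is an assumption; confirm it first.
    wget -qO- https://repo.severalnines.com/severalnines-repos.asc \
        | sudo gpg --dearmor -o /usr/share/keyrings/severalnines.gpg
    echo "deb [signed-by=/usr/share/keyrings/severalnines.gpg] https://repo.severalnines.com/deb ubuntu main" \
        | sudo tee /etc/apt/sources.list.d/s9s-repo.list
    sudo apt-get update
    ```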
2. Set up the ClusterControl CLI repository.
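    As on RHEL, the CLI comes from a separate s9s-tools repository, so the sketch below uses a placeholder source line; take the real one from the s9s-tools installation instructions:

    ```bash
    # Placeholder source line; substitute the s9s-tools repository
    # for your distribution release from the official instructions.
    echo "deb <s9s-tools repository URL for your release> ./" \
        | sudo tee /etc/apt/sources.list.d/s9s-tools.list
    sudo apt-get update
    ```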
3. If you have AppArmor running, disable it and open the required firewall ports (or stop the firewall):

    ```bash
    sudo systemctl stop apparmor
    sudo systemctl disable apparmor
    sudo systemctl mask apparmor
    sudo systemctl stop ufw  # or nftables or iptables
    ```
4. Install ClusterControl dependencies:
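    A sketch, assuming MySQL is used as the CMON database backend (as the MySQL step below implies); package names other than the MySQL server are assumptions:

    ```bash
    sudo apt-get install -y mysql-server curl wget openssl
    ```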
5. Install the ClusterControl packages:
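    Using the package names from the list at the top of this page (install only the optional modules you need):

    ```bash
    sudo apt-get install -y clustercontrol-mcc clustercontrol-controller \
        clustercontrol-notifications clustercontrol-ssh clustercontrol-cloud \
        clustercontrol-clud clustercontrol-proxy s9s-tools
    # Add clustercontrol-kuber-proxy if you need the Kubernetes integration.
    ```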
6. Start the MySQL server, enable it on boot, and set a MySQL root password by using the `mysql_secure_installation` script:
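    A sketch of this step:

    ```bash
    sudo systemctl enable --now mysql
    # Walks you through setting the root password and basic hardening.
    sudo mysql_secure_installation
    ```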
7. Create a database user called `cmon` and grant the right database privileges:

    ```bash
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"localhost" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"127.0.0.1" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"{controller_ip_address}" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'FLUSH PRIVILEGES'
    ```
8. Generate a ClusterControl key to be used by the `--rpc-token` option further below:
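    As in the RHEL section, one common way to generate such a token:

    ```bash
    # 16 random bytes rendered as 32 hex characters.
    openssl rand -hex 16
    ```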
9. Initialize the ClusterControl Controller service called `cmon` by running the following command:

    ```bash
    cmon --init \
        --mysql-hostname="127.0.0.1" \
        --mysql-port="3306" \
        --mysql-username="cmon" \
        --mysql-password="{cmonpassword}" \
        --mysql-database="cmon" \
        --hostname="{ClusterControl Primary IP Address}" \
        --rpc-token="{ClusterControl API key as generated above}" \
        --controller-id="clustercontrol"
    ```

    Example

    ```
    $ cmon --init \
      --mysql-hostname="127.0.0.1" \
      --mysql-port="3306" \
      --mysql-username="cmon" \
      --mysql-password="xxxx" \
      --mysql-database="cmon" \
      --hostname="10.10.10.13" \
      --rpc-token="dcd17b14e88b47f8ac7f25cd85508fb0" \
      --controller-id="clustercontrol"
    The --init option received, initializing the controller.
    Verifying the Cmon Database...
    Cmon Database connect success, the database is not yet created, ok.
    Checking the Cmon Database schema...
    Cmon Database does not exist, will be created now.
    Applying modifications from 'cmon_db.sql,cmon_db_mods_hotfix.sql,cmon_data.sql'.
    Verifying connection...
    Initializing the user manager.
    User manager is creating system users.
    Checking that the system users exist.
    Creating system groups: admins
    Creating system groups: users
    Creating system groups: nobody
    Creating system user.
    Creating nobody user.
    Creating admin user.
    ```

    Attention

    The value of `hostname` must be either a valid FQDN or the IP address of the ClusterControl node. If the host has multiple IP addresses, pick the primary IP address of the host. The `cmon` user password is the one set in the previous step when creating the user.
10. ClusterControl's event and cloud modules require `/etc/default/cmon` for their service definitions. Create the file and add the following lines:
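    A sketch of the file contents; the two endpoints are assumptions matching the default ports of the notification and cloud services, so verify them for your version:

    ```bash
    # /etc/default/cmon -- assumed defaults; verify before use.
    EVENTS_CLIENT="http://127.0.0.1:9510"
    CLOUD_SERVICE="http://127.0.0.1:9518"
    ```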
11. Initialize the ClusterControl web application to be started on port 443:

    Example

    ```
    $ ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc
    ClusterControl Manager - admin CLI v2.2
    Controller 127.0.0.1:9501 registered successfully
    Changing frontend_path from /app to /var/www/html/clustercontrol-mcc
    File /var/www/html/clustercontrol-mcc/config.js updated successfully
    Configuration /usr/share/ccmgr/ccmgr.yaml updated successfully
    Please restart 'cmon-proxy' service to apply changes
    ```

    Tip

    If you want to use your own SSL certificate, update the `tls_key` and `tls_cert` values inside `/usr/share/ccmgr/ccmgr.yaml` accordingly.
12. Enable the ClusterControl daemons on boot and start them:
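    A sketch, assuming the systemd unit names follow the package names; adjust the list to the modules you actually installed:

    ```bash
    # Unit names are assumptions based on the installed packages.
    for svc in cmon cmon-ssh cmon-events cmon-cloud cmon-proxy; do
        sudo systemctl enable --now "$svc"
    done
    ```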
13. Create a user called `ccsetup`, for new registration purposes (if this user exists, the ClusterControl GUI first run will default to the registration page instead):
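    The troubleshooting section at the bottom of this page uses the following CLI invocation for this step:

    ```bash
    s9s user --create --new-password=admin --group=admins \
        --email-address="[email protected]" \
        --controller="https://localhost:9501" ccsetup
    ```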
14. Open the ClusterControl GUI at `https://<ClusterControl_host>/` and create the default admin user by specifying a username (username "admin" is reserved) and password on the welcome page.
15. Generate an SSH key to be used by ClusterControl when connecting to all managed hosts. In this example, we are using the "root" user to connect to the managed hosts. To generate an SSH key for the root user, do:

    Note

    If you are running as a sudoer, the default SSH key will be located under `/home/$USER/.ssh/id_rsa`. See Operating System User.
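    A typical invocation (a sketch; the key type and path are conventional defaults):

    ```bash
    # Generate an RSA key pair for root without a passphrase.
    sudo ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
    ```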
16. Before importing a database server/cluster into ClusterControl or deploying a new cluster, set up passwordless SSH from the ClusterControl host to the database host(s). Use the following command to copy the SSH key to the target hosts, replacing `<SSH user>` and `<IP address of the target node>` with appropriate values; repeat the command for all target hosts:
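    A typical form of the command, using the placeholders from the text above:

    ```bash
    # Repeat for every target host.
    ssh-copy-id -i ~/.ssh/id_rsa.pub "<SSH user>@<IP address of the target node>"
    ```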
The installation is complete, and you can start importing an existing database cluster or deploying a new one. See the User Guide to start using ClusterControl.
Troubleshooting Issues
Failed to create the ccsetup user
In some cases, the `ccsetup` user fails to be created, with the following error:
```
$ s9s user --create --new-password=admin --group=admins --email-address="[email protected]" --controller="https://localhost:9501" ccsetup
Connect to localhost:9501 failed(111): Connection refused.
```
Check the ClusterControl Controller log messages in `/var/log/cmon.log` and see if there is the following error:

```
2025-08-21T07:33:33.296Z : (INFO) Checking if CmonDb access is working properly.
2025-08-21T07:33:33.297Z : (WARNING) Cmon DB connection error: Access denied for user 'cmon'@'localhost' (using password: YES) (errno: 1045)
2025-08-21T07:33:33.298Z : (INFO) CmonDb connection or query failure. Error code: 1045, Message: No connection (Access denied for user 'cmon'@'localhost' (using password: YES))
2025-08-21T07:33:33.298Z : (INFO) Lets block and wait for working CmonDb connection.
```
If so, remove the existing `/etc/cmon.cnf` configuration and run the `cmon --init` command again:
```
$ rm -rf /etc/cmon.cnf
$ cmon --init \
    --mysql-hostname="127.0.0.1" \
    --mysql-port="3306" \
    --mysql-username="cmon" \
    --mysql-password="xxxx" \
    --mysql-database="cmon" \
    --hostname="10.10.10.13" \
    --rpc-token="dcd17b14e88b47f8ac7f25cd85508fb0" \
    --controller-id="clustercontrol"
The --init option received, initializing the controller.
Generating cmon configuration...
------8<------8<------8<------8<------8<------8<------8<------
#
# Configuration file for the Cmon Controller.
#
#
# The name or IP address of the Cmon Controller.
#
hostname=10.10.10.13
#
# Cmon Database credentials. The controller will use
# this database to store its own data structures.
#
mysql_hostname=127.0.0.1
mysql_port=3306
mysql_password='xxxx'
cmon_user=cmon
cmon_db=cmon
rpc_key=dcd17b14e88b47f8ac7f25cd85508fb0
------8<------8<------8<------8<------8<------8<------8<------
Verifying the Cmon Database...
Cmon Database connect is successful, database exists.
Checking the Cmon Database schema...
Applying modifications from 'cmon_db_mods-2.2.3-2.3.2.sql,cmon_db_mods_hotfix.sql'.
Verifying connection...
Initializing the user manager.
User manager is creating system users.
Checking that the system users exist.
```
After that, re-create the `ccsetup` user using the ClusterControl CLI:

```bash
s9s user --create --new-password=admin --group=admins --email-address="[email protected]" --controller="https://localhost:9501" ccsetup
```
Error page "Not Found" after installation
In highly restricted environments, after a manual installation of ClusterControl is complete, users may encounter a "Not Found" error when attempting to access the ClusterControl GUI at `https://{ClusterControl_host}/`. This issue typically arises from incorrect directory permissions within `/var/www/html/clustercontrol-mcc`, mostly originating from a strict default umask.
Check the group and other permissions on every directory under `/var/www/html/clustercontrol-mcc`. Each directory in the path should have permissions `drwxr-xr-x` (755); if any directory does not match, make the necessary changes:
```
$ pwd
/var
$ ls -ld www
drwxr-x--x. 3 root root 18 Jun 12 10:37 www
$ chmod o+r www
$ ls -ld www
drwxr-xr-x. 3 root root 18 Jun 12 10:37 www
$ cd www
$ ls -ld html
drwxr-x--x. 4 root root 50 Jun 12 10:48 html
$ chmod o+r html
$ cd html
$ ls -ltr
drwxr-xr-x. 3 root root 4096 Jun 12 10:46 clustercontrol-mcc
drwxr-x---. 2 root root    6 Jun 12 10:48 cmon-repos
```
After correcting the permissions, restart the `cmon-proxy` service:
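```bash
sudo systemctl restart cmon-proxy
```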