Manual Installation

If you want to have more control over the installation process, you may perform a manual installation. ClusterControl requires a number of packages to be installed and configured, as described in the following list:

  • clustercontrol-mcc – ClusterControl graphical user interface (GUI).
  • clustercontrol-controller – ClusterControl CMON controller.
  • clustercontrol-notifications – ClusterControl notification module, to forward alarms and notifications to third-party tools like PagerDuty and Slack.
  • clustercontrol-ssh – ClusterControl web-based SSH module, to access the host via SSH directly from ClusterControl GUI.
  • clustercontrol-cloud – ClusterControl cloud module, to integrate with your cloud providers from ClusterControl GUI.
  • clustercontrol-proxy – ClusterControl controller proxy service for the ClusterControl web user interface (GUI).
  • clustercontrol-kuber-proxy – ClusterControl module for integration with Kubernetes environments.
  • clustercontrol-clud – ClusterControl cloud file manager module, to upload and download backups from cloud storage. It requires clustercontrol-cloud.
  • s9s-tools – ClusterControl command-line interface (CLI).

Note

Installing and uninstalling ClusterControl should not bring any downtime to the managed database cluster.

Requirements

Make sure the following is ready prior to this installation:

  • The ClusterControl host must be running on a supported operating system. See Operating System.
  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.
  • You must have an internet connection on the ClusterControl node during the installation process. Otherwise, see Offline Installation.

Installation Steps

The steps described in the following sections should be performed on the ClusterControl node unless specified otherwise.

Red Hat/CentOS/Rocky Linux/AlmaLinux

  1. Set up ClusterControl repository.

  2. Set up ClusterControl CLI repository.
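
     Steps 1 and 2 are covered on the respective repository setup pages. As an illustration only, a yum repository definition generally follows the format below; the baseurl and gpgkey values here are placeholders and must be taken from the official repository setup instructions:

    # /etc/yum.repos.d/clustercontrol.repo -- illustrative sketch only
    [clustercontrol]
    name = ClusterControl
    baseurl = {clustercontrol_repo_baseurl}
    enabled = 1
    gpgcheck = 1
    gpgkey = {clustercontrol_repo_gpgkey_url}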

  3. Disable SELinux and open the required ports (or stop the firewall):

    sed -i 's|SELINUX=enforcing|SELINUX=disabled|g' /etc/selinux/config
    setenforce 0
    systemctl stop firewalld
    
  4. Install required packages via package manager:

    dnf -y install wget dmidecode hostname python3 mariadb mariadb-server
    alternatives --set python /usr/bin/python3
    
  5. Install EPEL packages:

    # RHEL/CentOS/Rocky/Alma 9
    dnf -y install epel-release
    
    # RHEL/Alma/Rocky 8
    dnf config-manager --set-enabled powertools
    dnf -y install epel-release epel-next-release
    
  6. Install ClusterControl packages:

    dnf -y install clustercontrol-mcc \
                clustercontrol-controller \
                clustercontrol-ssh \
                clustercontrol-notifications \
                clustercontrol-cloud \
                clustercontrol-clud \
                clustercontrol-proxy \
                clustercontrol-kuber-proxy \
                s9s-tools
    
  7. Start the MariaDB server, enable it on boot and set the database root password:

    systemctl start mariadb
    systemctl enable mariadb
    mysqladmin -uroot password 'themysqlrootpassword'
    
  8. Create a database user called cmon, and grant proper privileges:

    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"localhost" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"127.0.0.1" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"{controller_ip_address}" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'FLUSH PRIVILEGES'
    

     Replace {cmonpassword} with the password of your choice and {controller_ip_address} with a valid FQDN or IP address of the ClusterControl node.
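
     Optionally, you can confirm that the cmon credentials work before initializing the controller (a simple connectivity check, assuming the database server listens on 127.0.0.1:3306):

    mysql -ucmon -p'{cmonpassword}' -h127.0.0.1 -P3306 -e 'SELECT 1'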

  9. Generate a ClusterControl key to be used by the --rpc-token option further below:

    $ uuidgen | tr -d '-'
    6856d96a19d049aa8a7f4a5ba57a34740b3faf57
    
  10. Initialize the ClusterControl Controller service, cmon, by running the following command:

    cmon --init \
         --mysql-hostname="127.0.0.1" \
         --mysql-port="3306" \
         --mysql-username="cmon" \
         --mysql-password="{cmonpassword}" \
         --mysql-database="cmon" \
         --hostname="{ClusterControl Primary IP Address}" \
         --rpc-token="{ClusterControl API key as generated above}" \
         --controller-id="clustercontrol"
    
    Example
    $ cmon --init \
           --mysql-hostname="127.0.0.1" \
           --mysql-port="3306" \
           --mysql-username="cmon" \
           --mysql-password="xxxx" \
           --mysql-database="cmon" \
           --hostname="10.10.10.13" \
           --rpc-token="dcd17b14e88b47f8ac7f25cd85508fb0" \
           --controller-id="clustercontrol"
       The --init option received, initializing the controller.
       Verifying the Cmon Database...
       Cmon Database connect success, the database is not yet created, ok.
       Checking the Cmon Database schema...
       Cmon Database does not exist, will be created now.
       Applying modifications from 'cmon_db.sql,cmon_db_mods_hotfix.sql,cmon_data.sql'.
    
       Verifying connection...
       Initializing the user manager.
       User manager is creating system users.
       Checking that the system users exist.
       Creating system groups: admins
       Creating system groups: users
       Creating system groups: nobody
       Creating system user.
       Creating nobody user.
      Creating admin user.
    

    Attention

     The value of the hostname must be either a valid FQDN or IP address of the ClusterControl node. If the host has multiple IP addresses, pick the primary IP address of the host. The cmon user password is the one you set in the previous step when creating the user.

  11. ClusterControl event and cloud modules require their service definition inside /etc/default/cmon. Create the file and add the following lines:

    EVENTS_CLIENT="http://127.0.0.1:9510"
    CLOUD_SERVICE="http://127.0.0.1:9518"
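
     One way to create this file with the required lines (a shell sketch, run as root) is:

    echo 'EVENTS_CLIENT="http://127.0.0.1:9510"' > /etc/default/cmon
    echo 'CLOUD_SERVICE="http://127.0.0.1:9518"' >> /etc/default/cmon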
    
  12. Initialize the ClusterControl web application to be started on port 443:

    ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc
    
    Example
    $ ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc
    ClusterControl Manager - admin CLI v2.2
    Controller 127.0.0.1:9501 registered successfully
    Changing frontend_path from /app to /var/www/html/clustercontrol-mcc
    File /var/www/html/clustercontrol-mcc/config.js updated successfully
    Configuration /usr/share/ccmgr/ccmgr.yaml updated successfully
    Please restart 'cmon-proxy' service to apply changes
    

    Tip

    If you want to use your own SSL certificate, update the tls_key and tls_cert values inside /usr/share/ccmgr/ccmgr.yaml accordingly.
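
     For example, the relevant entries in /usr/share/ccmgr/ccmgr.yaml would look similar to the following (the certificate and key paths below are placeholders for your own files):

    tls_cert: /etc/ssl/certs/clustercontrol.crt
    tls_key: /etc/ssl/private/clustercontrol.key

     Remember to restart the cmon-proxy service afterwards so the new certificate is picked up.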

  13. Enable ClusterControl daemons on boot and start them:

    systemctl enable cmon cmon-ssh cmon-events cmon-cloud cmon-proxy kuber-proxy
    systemctl start cmon cmon-ssh cmon-events cmon-cloud cmon-proxy kuber-proxy
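
     You can optionally verify that the services started correctly before proceeding:

    systemctl --no-pager status cmon cmon-ssh cmon-events cmon-cloud cmon-proxy kuber-proxy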
    
  14. Create a user called ccsetup for new registration purposes (if this user exists, the ClusterControl GUI will default to the registration page on its first run):

    export S9S_USER_CONFIG=/tmp/ccsetup.conf
    s9s user --create --new-password=admin --group=admins --email-address="{your_email_address}" --controller="https://localhost:9501" ccsetup
    unset S9S_USER_CONFIG
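
     To confirm the user was created, you can list the users known to the controller, authenticating with the configuration file written above (a quick check using the ClusterControl CLI):

    S9S_USER_CONFIG=/tmp/ccsetup.conf s9s user --list --long --controller="https://localhost:9501"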
    
  15. Open ClusterControl GUI at https://<ClusterControl_host>/ and create the default admin user by specifying a username (username "admin" is reserved) and password on the welcome page.

  16. Generate an SSH key to be used by ClusterControl when connecting to all managed hosts. In this example, we are using the root user to connect to the managed hosts. To generate an SSH key for the root user, do:

    $ whoami
      root
    $ ssh-keygen -t rsa # Press enter for all prompts
    

    Note

     If you are running as a sudo user, the default SSH key will be located under /home/$USER/.ssh/id_rsa. See Operating System User.

  17. Before creating or importing a database server/cluster into ClusterControl, set up passwordless SSH from the ClusterControl host to the database host(s). Use the following command to copy the SSH key to the target hosts:

    ssh-copy-id -i ~/.ssh/id_rsa <SSH user>@<IP address of the target node>
    

    Replace <SSH user> and <IP address of the target node> with appropriate values. Repeat the command for all target hosts.
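
     You can then verify that passwordless SSH works from the ClusterControl host, for example:

    ssh -i ~/.ssh/id_rsa <SSH user>@<IP address of the target node> "hostname"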

The installation is complete and you can start to import existing or deploy a new database cluster. See User Guide to start using ClusterControl.

Debian/Ubuntu

The following steps should be performed on the ClusterControl node unless specified otherwise. Omit sudo if you are installing as the root user.

  1. Set up ClusterControl repository.

  2. Set up ClusterControl CLI repository.
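
     Steps 1 and 2 are covered on the respective repository setup pages. As an illustration only, an APT source entry generally follows the format below; the keyring path, repository URL, suite, and component are placeholders that must be taken from the official repository setup instructions:

    # /etc/apt/sources.list.d/clustercontrol.list -- illustrative sketch only
    deb [signed-by={keyring_path}] {clustercontrol_apt_repo_url} {suite} {component}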

  3. If you have AppArmor running, disable it and open the required firewall ports (or stop iptables):

    sudo systemctl stop apparmor
    sudo systemctl disable apparmor
    sudo systemctl mask apparmor
    sudo systemctl stop ufw # or nftables or iptables

  4. Install ClusterControl dependencies:

    sudo apt-get update
    sudo apt-get install -y python3 apache2 software-properties-common mysql-client mysql-server
    sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1
    
  5. Install the ClusterControl package:

    sudo apt-get install -y clustercontrol-controller \
         clustercontrol-mcc \
         clustercontrol-ssh \
         clustercontrol-notifications \
         clustercontrol-cloud \
         clustercontrol-clud \
         clustercontrol-proxy \
         clustercontrol-kuber-proxy \
         s9s-tools
    
  6. Start the MySQL server, enable it on boot, and set a MySQL root password using the mysql_secure_installation script:

    systemctl start mysql
    systemctl enable mysql
    mysql_secure_installation
    
  7. Create a database user called cmon and grant the right database privileges:

    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"localhost" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"127.0.0.1" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"{controller_ip_address}" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    mysql -uroot -p -e 'FLUSH PRIVILEGES'
    
  8. Generate a ClusterControl key to be used by the --rpc-token option further below:

    $ uuidgen | tr -d '-'
    6856d96a19d049aa8a7f4a5ba57a34740b3faf57
    
  9. Initialize the ClusterControl Controller service called cmon by running the following command:

    cmon --init \
         --mysql-hostname="127.0.0.1" \
         --mysql-port="3306" \
         --mysql-username="cmon" \
         --mysql-password="{cmonpassword}" \
         --mysql-database="cmon" \
         --hostname="{ClusterControl Primary IP Address}" \
         --rpc-token="{ClusterControl API key as generated above}" \
         --controller-id="clustercontrol"
    
    Example
    $ cmon --init \
           --mysql-hostname="127.0.0.1" \
           --mysql-port="3306" \
           --mysql-username="cmon" \
           --mysql-password="xxxx" \
           --mysql-database="cmon" \
           --hostname="10.10.10.13" \
           --rpc-token="dcd17b14e88b47f8ac7f25cd85508fb0" \
           --controller-id="clustercontrol"
       The --init option received, initializing the controller.
       Verifying the Cmon Database...
       Cmon Database connect success, the database is not yet created, ok.
       Checking the Cmon Database schema...
       Cmon Database does not exist, will be created now.
       Applying modifications from 'cmon_db.sql,cmon_db_mods_hotfix.sql,cmon_data.sql'.
    
       Verifying connection...
       Initializing the user manager.
       User manager is creating system users.
       Checking that the system users exist.
       Creating system groups: admins
       Creating system groups: users
       Creating system groups: nobody
       Creating system user.
       Creating nobody user.
      Creating admin user.
    

    Attention

     The value of the hostname must be either a valid FQDN or IP address of the ClusterControl node. If the host has multiple IP addresses, pick the primary IP address of the host. The cmon user password is the one you set in the previous step when creating the user.

  10. ClusterControl’s event and cloud modules require their service definitions in /etc/default/cmon. Create the file and add the following lines:

    EVENTS_CLIENT="http://127.0.0.1:9510"
    CLOUD_SERVICE="http://127.0.0.1:9518"
    
  11. Initialize the ClusterControl web application to be started on port 443:

    ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc
    
    Example
    $ ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc
    ClusterControl Manager - admin CLI v2.2
    Controller 127.0.0.1:9501 registered successfully
    Changing frontend_path from /app to /var/www/html/clustercontrol-mcc
    File /var/www/html/clustercontrol-mcc/config.js updated successfully
    Configuration /usr/share/ccmgr/ccmgr.yaml updated successfully
    Please restart 'cmon-proxy' service to apply changes
    

    Tip

    If you want to use your own SSL certificate, update the tls_key and tls_cert values inside /usr/share/ccmgr/ccmgr.yaml accordingly.

  12. Enable the ClusterControl daemons on boot and start them:

    systemctl daemon-reload
    systemctl enable cmon cmon-ssh cmon-events cmon-cloud cmon-proxy kuber-proxy
    systemctl restart cmon cmon-ssh cmon-events cmon-cloud cmon-proxy kuber-proxy
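
     You can optionally confirm that the services are listening on the ports referenced in this guide (443 for the web application, 9501 for the controller RPC endpoint, 9510 for events, 9518 for cloud), for example:

    sudo ss -tlnp | grep -E ':(443|9501|9510|9518)\b'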
    
  13. Create a user called ccsetup for new registration purposes (if this user exists, the ClusterControl GUI will default to the registration page on its first run):

    export S9S_USER_CONFIG=/tmp/ccsetup.conf
    s9s user --create --new-password=admin --group=admins --email-address="{your_email_address}" --controller="https://localhost:9501" ccsetup
    unset S9S_USER_CONFIG
    
  14. Open ClusterControl GUI at https://<ClusterControl_host>/ and create the default admin user by specifying a username (username "admin" is reserved) and password on the welcome page.

  15. Generate an SSH key to be used by ClusterControl when connecting to all managed hosts. In this example, we are using the "root" user to connect to the managed hosts. To generate an SSH key for the root user, do:

    $ whoami
      root
    $ ssh-keygen -t rsa # Press enter for all prompts
    

    Note

     If you are running as a sudo user, the default SSH key will be located under /home/$USER/.ssh/id_rsa. See Operating System User.

  16. Before importing a database server/cluster into ClusterControl or deploying a new cluster, set up passwordless SSH from the ClusterControl host to the database host(s). Use the following command to copy the SSH key to the target hosts:

    ssh-copy-id -i ~/.ssh/id_rsa <SSH user>@<IP address of the target node>
    

    Replace <SSH user> and <IP address of the target node> with appropriate values. Repeat the command for all target hosts.

The installation is complete and you can start to import existing or deploy a new database cluster. See User Guide to start using ClusterControl.

Troubleshooting Issues

Failed to create ccsetup user

In some cases, creating the ccsetup user fails with the following error:

s9s user --create --new-password=admin --group=admins --email-address="{your_email_address}" --controller="https://localhost:9501" ccsetup
Connect to localhost:9501 failed(111): Connection refused.

Check the ClusterControl Controller log messages in /var/log/cmon.log and see whether the following error appears:

2025-08-21T07:33:33.296Z : (INFO) Checking if CmonDb access is working properly.
2025-08-21T07:33:33.297Z : (WARNING) Cmon DB connection error: Access denied for user 'cmon'@'localhost' (using password: YES) (errno: 1045)
2025-08-21T07:33:33.298Z : (INFO) CmonDb connection or query failure. Error code: 1045, Message: No connection (Access denied for user 'cmon'@'localhost' (using password: YES))
2025-08-21T07:33:33.298Z : (INFO) Lets block and wait for working CmonDb connection.

Remove the existing /etc/cmon.cnf configuration file and run the cmon --init command again:

$ rm -rf /etc/cmon.cnf
$ cmon --init \
     --mysql-hostname="127.0.0.1" \
     --mysql-port="3306" \
     --mysql-username="cmon" \
     --mysql-password="xxxx" \
     --mysql-database="cmon" \
     --hostname="10.10.10.13" \
     --rpc-token="dcd17b14e88b47f8ac7f25cd85508fb0" \
     --controller-id="clustercontrol"
The --init option received, initializing the controller.
Generating cmon configuration...
------8<------8<------8<------8<------8<------8<------8<------
#
# Configuration file for the Cmon Controller.
#

#
# The name or IP address of the Cmon Controller.
#
hostname=10.10.10.13

#
# Cmon Database credentials. The controller will use
# this database to store its own data structures.
#
mysql_hostname=127.0.0.1
mysql_port=3306
mysql_password='xxxx'
cmon_user=cmon
cmon_db=cmon
rpc_key=dcd17b14e88b47f8ac7f25cd85508fb0

------8<------8<------8<------8<------8<------8<------8<------
Verifying the Cmon Database...
Cmon Database connect is successful, database exists.
Checking the Cmon Database schema...
Applying modifications from 'cmon_db_mods-2.2.3-2.3.2.sql,cmon_db_mods_hotfix.sql'.

Verifying connection...
Initializing the user manager.
User manager is creating system users.
Checking that the system users exist.

After that, re-create the ccsetup user using the ClusterControl CLI:

s9s user --create --new-password=admin --group=admins --email-address="{your_email_address}" --controller="https://localhost:9501" ccsetup

Error page "Not Found" after installation

In highly restricted environments, after a manual installation of ClusterControl is complete, users may encounter a "Not Found" error when attempting to access the ClusterControl GUI at https://{ClusterControl_host}/. This issue typically arises from incorrect directory permissions within /var/www/html/clustercontrol-mcc, most often caused by a strict default umask.

Example

(Screenshot: ClusterControl GUI "Not Found" error page)

Check the group and others permissions on every directory in the path leading to and within /var/www/html/clustercontrol-mcc. Each directory should have permissions drwxr-xr-x (755); if any directory does not, adjust it as shown in the example below.
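
The walkthrough below adjusts each directory by hand; as a shortcut, you can also normalize the directory permissions in one go (a sketch, assuming the default installation path):

chmod 755 /var/www /var/www/html
find /var/www/html/clustercontrol-mcc -type d -exec chmod 755 {} +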

$ pwd
/var

$ ls -ld www
drwxr-x--x.  3 root root   18 Jun 12 10:37 www

$ chmod o+r www
$ ls -ld www
drwxr-xr-x.  3 root root   18 Jun 12 10:37 www

$ cd www
$ ls -ld html
drwxr-x--x. 4 root root 50 Jun 12 10:48 html

$ chmod o+r html
$ cd html
$ ls -ltr
drwxr-xr-x. 3 root root 4096 Jun 12 10:46 clustercontrol-mcc
drwxr-x---. 2 root root    6 Jun 12 10:48 cmon-repos

After the permissions for group and others are correct, restart the cmon-proxy service:

systemctl restart cmon-proxy
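
After the restart, you can confirm that the GUI page is served again, for example:

curl -k -I https://localhost/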