Upgrading ClusterControl
There are several ways to upgrade ClusterControl to the latest version. However, we recommend performing an online upgrade, as its instructions are kept the most up to date. For details on the latest release, see Release Notes.
Attention
If you are upgrading from ClusterControl v1 (1.9.8 and older), please see Upgrading from ClusterControl 1.x to 2.x. If you are running on legacy operating systems with GLIBC < 2.27 (RHEL 7, CentOS 7, Debian 9), you can only upgrade to version ClusterControl 2.1.0. See Upgrading to v2.1.0 for Legacy Operating Systems.
Attention
If you are upgrading from ClusterControl v2 (2.0.0 and later), proceed to Online Upgrade or Offline Upgrade.
Online upgrade
This is the recommended way to upgrade ClusterControl. The following upgrade procedures require an internet connection on the ClusterControl node.
Attention
If you are upgrading from ClusterControl v1 (1.9.8 and older), please see Upgrading from ClusterControl 1.x to 2.x. If you are running on legacy operating systems with GLIBC < 2.27 (RHEL 7, CentOS 7, Debian 9), you can only upgrade to version ClusterControl 2.1.0. See Upgrading to v2.1.0 for Legacy Operating Systems.
Red Hat/CentOS/Rocky Linux/AlmaLinux
- Starting with ClusterControl 2.3.2, the web application is included in the clustercontrol-mcc package, which supersedes the clustercontrol2 package. This new setup eliminates the dependency on the Apache web server; instead, the web application is served by the ccmgr process, part of the clustercontrol-proxy package. Consequently, to upgrade, the existing Apache server must be uninstalled using the following commands:
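The exact commands depend on your environment; a minimal sketch for RHEL-based systems, assuming the stock Apache (httpd) package is installed, is:
$ systemctl stop httpd
$ dnf remove -y httpd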
- Clear the repository cache so it retrieves the latest repository list, and perform the upgrade:
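For example (the package list below is an assumption; limit it to the ClusterControl components actually installed on your host):
$ dnf clean all
$ dnf install -y clustercontrol-mcc clustercontrol-controller clustercontrol-proxy \
    clustercontrol-notifications clustercontrol-ssh clustercontrol-cloud clustercontrol-clud s9s-tools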
- Initialize the new ClusterControl web application to be started on port 443:
Example
$ ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc
ClusterControl Manager - admin CLI v2.2
Controller 127.0.0.1:9501 registered successfully
Changing frontend_path from /app to /var/www/html/clustercontrol-mcc
File /var/www/html/clustercontrol-mcc/config.js updated successfully
Configuration /usr/share/ccmgr/ccmgr.yaml updated successfully
Please restart 'cmon-proxy' service to apply changes
Tip
If you want to use your own SSL certificate, update the tls_key and tls_cert values inside /usr/share/ccmgr/ccmgr.yaml accordingly.
- Restart all ClusterControl services:
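For example (the service names are an assumption and may vary with the components installed on your host):
$ systemctl restart cmon cmon-proxy cmon-events cmon-ssh cmon-cloud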
The upgrade is now complete. Verify the new version from the GUI's footer or by running cmon -v. You should log in again if your ClusterControl GUI session is active.
Debian/Ubuntu
- Starting with ClusterControl 2.3.2, the web application is included in the clustercontrol-mcc package, which supersedes the clustercontrol2 package. This new setup eliminates the dependency on the Apache web server; instead, the web application is served by the ccmgr process, part of the clustercontrol-proxy package. Consequently, to upgrade, the existing Apache server and the old clustercontrol2 package must be uninstalled using the following commands:
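The exact commands depend on your environment; a minimal sketch for Debian/Ubuntu, assuming the stock apache2 package is installed, is:
$ systemctl stop apache2
$ apt-get remove -y apache2 clustercontrol2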
- Update the repository list and perform the upgrade by installing the latest version of the following packages:
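For example (the package list below is an assumption; limit it to the ClusterControl components actually installed on your host):
$ apt-get update
$ apt-get install -y clustercontrol-mcc clustercontrol-controller clustercontrol-proxy \
    clustercontrol-notifications clustercontrol-ssh clustercontrol-cloud clustercontrol-clud s9s-tools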
- The new web application defaults to listening on port 19052. For direct HTTPS access at https://<ClusterControl_host>/, it is advisable to change this to port 443 in /usr/share/ccmgr/ccmgr.yaml. Execute the following command to make this change:
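A minimal sketch of the edit, assuming the listening port is defined by a port key in /usr/share/ccmgr/ccmgr.yaml (verify the key name in your file before running it):
$ sed -i 's/port: 19052/port: 443/' /usr/share/ccmgr/ccmgr.yaml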
Example
The following is the example output of ccmgr.yaml after we changed the port to 443:
Tip
If you want to use your own SSL certificate, update the tls_key and tls_cert values inside /usr/share/ccmgr/ccmgr.yaml accordingly.
- Restart all ClusterControl services:
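For example (the service names are an assumption and may vary with the components installed on your host):
$ systemctl restart cmon cmon-proxy cmon-events cmon-ssh cmon-cloud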
The upgrade is now complete. Verify the new version from the GUI's footer or by running cmon -v. You should log in again if your ClusterControl GUI session is active.
Offline upgrade
The following upgrade procedures can be performed without an internet connection on the ClusterControl node.
Attention
If you are upgrading from ClusterControl v1 (1.9.8 and older), please see Upgrading from ClusterControl 1.x to 2.x. If you are running on legacy operating systems with GLIBC < 2.27 (RHEL 7, CentOS 7, Debian 9), you can only upgrade to version ClusterControl 2.1.0. See Upgrading to v2.1.0 for Legacy Operating Systems.
Red Hat/CentOS/Rocky Linux/AlmaLinux
- Download the latest version of the ClusterControl-related RPM packages from the Severalnines download site and the Severalnines Repository. The packages you need to download are listed below:
- clustercontrol-mcc - ClusterControl GUI
- clustercontrol-controller - ClusterControl Controller (CMON)
- clustercontrol-notifications - ClusterControl event module
- clustercontrol-ssh - ClusterControl web-ssh module
- clustercontrol-cloud - ClusterControl cloud module
- clustercontrol-clud - ClusterControl cloud's file manager module
- clustercontrol-proxy - ClusterControl proxy manager and web server
- clustercontrol-kuber-proxy - ClusterControl Kubernetes proxy
- s9s-tools - ClusterControl CLI (s9s) - Download from here.
- Starting with ClusterControl 2.3.2, the web application is included in the clustercontrol-mcc package, which supersedes the clustercontrol2 package. This new setup eliminates the dependency on the Apache web server; instead, the web application is served by the ccmgr process, part of the clustercontrol-proxy package. Consequently, to upgrade, the existing Apache server must be uninstalled using the following commands:
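The exact commands depend on your environment; a minimal sketch for RHEL-based systems, assuming the stock Apache (httpd) package is installed, is:
$ systemctl stop httpd
$ dnf remove -y httpd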
- Install the packages using dnf localinstall to satisfy all dependencies:
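For example, assuming all downloaded RPM packages are in the current working directory:
$ dnf localinstall -y clustercontrol-*.rpm s9s-tools-*.rpm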
- Initialize the new ClusterControl web application to be started on port 443:
Example
$ ccmgradm init --local-cmon -p 443 -f /var/www/html/clustercontrol-mcc
ClusterControl Manager - admin CLI v2.2
Controller 127.0.0.1:9501 registered successfully
Changing frontend_path from /app to /var/www/html/clustercontrol-mcc
File /var/www/html/clustercontrol-mcc/config.js updated successfully
Configuration /usr/share/ccmgr/ccmgr.yaml updated successfully
Please restart 'cmon-proxy' service to apply changes
Tip
If you want to use your own SSL certificate, update the tls_key and tls_cert values inside /usr/share/ccmgr/ccmgr.yaml accordingly.
- Restart all ClusterControl services:
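For example (the service names are an assumption and may vary with the components installed on your host):
$ systemctl restart cmon cmon-proxy cmon-events cmon-ssh cmon-cloud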
The upgrade is now complete. Verify the new version from the GUI's footer or by running cmon -v. You should log in again if your ClusterControl GUI session is active.
Debian/Ubuntu
- Download the latest version of the ClusterControl-related DEB packages from the Severalnines download site and the Severalnines Repository. The packages you need to download are listed below:
- clustercontrol-mcc - ClusterControl GUI
- clustercontrol-controller - ClusterControl Controller (CMON)
- clustercontrol-notifications - ClusterControl event module
- clustercontrol-ssh - ClusterControl web-ssh module
- clustercontrol-cloud - ClusterControl cloud module
- clustercontrol-clud - ClusterControl cloud's file manager module
- clustercontrol-proxy - ClusterControl proxy manager and web server
- clustercontrol-kuber-proxy - ClusterControl Kubernetes proxy
- s9s-tools - ClusterControl CLI (s9s) - Download from here.
- libs9s0 - ClusterControl CLI (s9s) library - Download from here.
- Starting with ClusterControl 2.3.2, the web application is included in the clustercontrol-mcc package, which supersedes the clustercontrol2 package. This new setup eliminates the dependency on the Apache web server; instead, the web application is served by the ccmgr process, part of the clustercontrol-proxy package. Consequently, to upgrade, the existing Apache server and the old clustercontrol2 package must be uninstalled using the following commands:
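The exact commands depend on your environment; a minimal sketch for Debian/Ubuntu, assuming the stock apache2 package is installed, is:
$ systemctl stop apache2
$ apt-get remove -y apache2 clustercontrol2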
- Upload all the packages to the ClusterControl host and install them using the dpkg command:
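For example, assuming all downloaded DEB packages are in the current working directory:
$ dpkg -i libs9s0*.deb s9s-tools*.deb clustercontrol-*.deb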
- The new web application defaults to listening on port 19052. For direct HTTPS access at https://<ClusterControl_host>/, it is advisable to change this to port 443 in /usr/share/ccmgr/ccmgr.yaml. Execute the following command to make this change:
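A minimal sketch of the edit, assuming the listening port is defined by a port key in /usr/share/ccmgr/ccmgr.yaml (verify the key name in your file before running it):
$ sed -i 's/port: 19052/port: 443/' /usr/share/ccmgr/ccmgr.yaml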
- Restart the ClusterControl services:
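For example (the service names are an assumption and may vary with the components installed on your host):
$ systemctl restart cmon cmon-proxy cmon-events cmon-ssh cmon-cloud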
The upgrade is now complete. Verify the new version from the GUI's footer or by running cmon -v. You should log in again if your ClusterControl GUI session is active.
Upgrading from ClusterControl v1 to v2
ClusterControl v2 features significant changes and improvements across its frontend web UI, user management, monitoring method, and backend controller. Consequently, a proper upgrade and migration of several components is essential for smooth operation.
The easiest way to upgrade from v1 to v2 is to install a new ClusterControl instance with the latest version (see Quickstart), and then import all clusters from the old ClusterControl (v1) into the new ClusterControl (v2) using the Import existing cluster feature. See Import Database Cluster for details. After the import, move the settings and configurations over manually. The drawback is that you will lose historical monitoring data, and settings have to be migrated by hand.
If this is not an option, kindly follow the upgrade instructions as explained in this section.
Requirements
To upgrade from legacy ClusterControl 1.x (1.9.8 and older) to the latest 2.x version, the following conditions must be met:
Operating System Compatibility
- ClusterControl 2.2.0 and later requires GLIBC 2.27 or later, which is available on RHEL/Rocky Linux/AlmaLinux ≥ 8, Ubuntu ≥ 20.04, and Debian ≥ 10. This is due to significant changes related to OpenSSL v3 and FIPS 140-2 compliance. See Operating System.
- If your system uses an older operating system with GLIBC < 2.27 (RHEL 7, CentOS 7, Debian 9), you can only upgrade to ClusterControl version 2.1.0. See Upgrading to v2.1.0 for Legacy Operating Systems for instructions.
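To check which GLIBC version your current host provides, you can run, for example:
$ ldd --version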
User Management
- ClusterControl v2 introduces a new user management system (User Management v2) where login is based on a username, such as ccadmin. If this is how you log into ClusterControl, you can proceed directly to Online upgrade or Offline upgrade.
- If you currently log into the ClusterControl GUI v1 with an email address, you are using the legacy user management system. Before upgrading to ClusterControl v2, you must first migrate these existing users to the new User Management v2. Refer to Migrating ClusterControl users to v2 for instructions. If you log into ClusterControl GUI v1 using a username, skip the user migration step.
LDAP Integration
- LDAP in ClusterControl v2 is only compatible with the new User Management v2.
- If you have existing group mappings configured in LDAP v1, you will need to recreate these mappings in LDAP v2 after completing the user migration to v2. Refer to Migrating LDAP users to v2 for details.
Monitoring
- ClusterControl v2 defaults to agent-based monitoring using Prometheus and exporters for collecting time-series data (CPU, RAM, disk, etc.).
- After migrating to User Management v2, you can enable agent-based monitoring for each cluster via ClusterControl GUI v1 or v2 → choose a cluster → Dashboards → Enable agent-based monitoring. ClusterControl will then deploy the necessary exporters on each database node and configure the Prometheus targets accordingly. If this is already enabled in your ClusterControl v1, you can directly upgrade the packages to ClusterControl v2.x by following Online upgrade or Offline upgrade.
To summarize, here are the main differences between ClusterControl v1 and v2:
Aspect | ClusterControl v1 | ClusterControl v2 |
---|---|---|
User management | Legacy user management, handled by the ClusterControl GUI; login with an email address | User Management v2, handled by the ClusterControl controller (cmon); login with a username |
Monitoring | Agentless monitoring by default | Agent-based monitoring with Prometheus and exporters by default |
LDAP | LDAP is handled by ClusterControl GUI | LDAP is handled by ClusterControl controller (cmon) |
GUI URL | https://<ClusterControl_host>/clustercontrol | https://<ClusterControl_host> |
Interface | GUI v1 (screenshot) | GUI v2 (screenshot) |
Migrating monitoring to agent-based monitoring
Starting from ClusterControl v2, time-series monitoring is performed by Prometheus. You therefore need to go to ClusterControl GUI v1 → choose a cluster → Dashboards → Enable Agent-Based Monitoring to enable Prometheus agent-based monitoring for every cluster. ClusterControl will deploy Prometheus (defaulting to the ClusterControl host) with the corresponding exporters on all managed nodes. If you have many clusters and want to simplify this process, you may use the ClusterControl CLI with the --deploy-agents flag.
Example
The following commands enable agent-based monitoring for 7 clusters, one cluster at a time:
s9s cluster --cluster-id=3 --deploy-agents --log
s9s cluster --cluster-id=16 --deploy-agents --log
s9s cluster --cluster-id=26 --deploy-agents --log
s9s cluster --cluster-id=46 --deploy-agents --log
s9s cluster --cluster-id=56 --deploy-agents --log
s9s cluster --cluster-id=57 --deploy-agents --log
s9s cluster --cluster-id=58 --deploy-agents --log
To verify that every cluster has agent-based monitoring enabled:
- Make sure the monitoring dashboards at ClusterControl GUI v1 → choose a cluster → Dashboards are populated correctly for every cluster.
- Check that an instance with a P (Prometheus) role is in the node list of every cluster:
Example
The following output shows that every cluster has its own Prometheus instance (note the left-most column):
$ s9s nodes --list --long | grep ^P
Po-- 2.29.2 3 MariaDB Replication 10.4 192.168.61.210 9090 Pro…
Po-- 2.29.2 16 PROD - MySQL Galera (Percona 8.0) 192.168.61.210 9090 Pro…
Po-- 2.29.2 17 DR - MySQL Galera (Percona 8.0) 192.168.61.210 9090 Pro…
Po-- 2.29.2 18 Redis Cluster 7 Alma9 192.168.61.210 9090 Pro…
Po-- 2.29.2 22 MySQL 8.0 Replication 192.168.61.210 9090 Pro…
Po-- 2.29.2 36 MongoDB RepSet 7 192.168.61.210 9090 Pro…
Po-- 2.29.2 37 MongoDB Standalone 7 percona 192.168.61.210 9090 Pro…
Po-- 2.29.2 38 PostgreSQL 16 (streaming) 192.168.61.210 9090 Pro…
Po-- 2.29.2 40 PostgreSQL 17 (logical)_1 192.168.61.210 9090 Pro…
Po-- 2.29.2 41 PostgreSQL 17 (logical)_2 192.168.61.210 9090 Pro…
Migrating ClusterControl users to User Management v2
If you log into ClusterControl GUI v1 using an email address (e.g., [email protected]), you are still using the legacy ClusterControl user management handled by the ClusterControl GUI.
The new user management system (also known as User Management v2) is handled by the ClusterControl Controller (cmon), allowing ClusterControl users to communicate directly with the cmon backend service through various interfaces such as the command-line interface (ClusterControl CLI), the application programming interface (ClusterControl RPC API v2), and the Terraform Provider for ClusterControl.
If you log into ClusterControl GUI v1 using a username (e.g., ccadmin), you can skip the steps detailed in this section, as User Management v2 is already enabled.
Here are the differences in the user interface between the legacy user management and User Management v2 in ClusterControl GUI v1:
Legacy user management | User Management v2 |
---|---|
(screenshot) | (screenshot) |
Unfortunately, the user and group migration must be performed manually, meaning all existing ClusterControl users and groups must be re-created in User Management v2. If you have a lot of users, we recommend using the ClusterControl CLI method. Here are the steps to migrate ClusterControl users to the new User Management v2 using the ClusterControl CLI.
- Create a new ClusterControl super-admin user. In this example, we call it "ccadmin". This user must belong to the "admins" group:

s9s user \
    --create \
    --group=admins \
    --generate-key \
    --new-password=SuPeRs3cr3tP455 \
    --email-address=[email protected] \
    --first-name=ClusterControl \
    --last-name=Administrator \
    --batch \
    ccadmin
Tip
Instead of ccadmin, you may use your own username. The username admin is not available; it is reserved for ClusterControl internal usage.
- Create all ClusterControl users and their respective groups. In the following example, we create a user called John Doe with the login name "john", and also create a group called "dba" to which John Doe will belong:

s9s user \
    --create \
    --group=dba \
    --create-group \
    --generate-key \
    --new-password=s3cr3tP455 \
    --email-address=[email protected] \
    --first-name=John \
    --last-name=Doe \
    --batch \
    john
Repeat this command for every user. Omit --create-group if the group already exists, and change the --new-password value accordingly.
- If you want to create a new group, you must pass a username as one of the members of the group. In this example, we create a group called "sysadmin" and also create a default user for this group, also called "sysadmin":
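A sketch following the same pattern as the commands above (the password and name values are placeholders):

s9s user \
    --create \
    --group=sysadmin \
    --create-group \
    --generate-key \
    --new-password=s3cr3tP455 \
    --first-name=System \
    --last-name=Administrator \
    --batch \
    sysadmin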
- To change user "john" from group "dba" to another group called "sysadmin":
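A sketch assuming the s9s user --set-group option is available in your s9s-tools version (check s9s user --help to confirm):

s9s user --set-group --group=sysadmin john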
Migrating LDAP users to v2
Log into the ClusterControl GUI using a username from the admins group. This will activate User Management v2 and allow the LDAP v2 configuration at ClusterControl GUI v1 or v2 → User Management → LDAP → LDAP Settings. Make sure you see the LDAP v2 settings page.
To migrate LDAP users to v2, re-configure the LDAP Settings accordingly, using the same credentials and configuration as in the legacy user management, together with its group mappings. See the LDAP section for details.
Upgrading to v2.1.0 for Legacy Operating Systems
Starting from ClusterControl v2.2.0 (September 2024), ClusterControl requires GLIBC 2.27 or later due to major changes supporting OpenSSL v3 and FIPS 140-2 requirements. This is only available on the following operating systems:
- Red Hat Enterprise Linux/Rocky Linux/AlmaLinux ≥ 8
- Ubuntu ≥ 20.04
- Debian ≥ 10
We highly recommend upgrading the operating system to a supported version shown in the Operating System section for long-term support. One method is to prepare a new ClusterControl server on a supported operating system, install the latest ClusterControl (a new installation of v2 and above provides only the new GUI v2), and then import the database clusters into the new ClusterControl v2.
If you want to upgrade ClusterControl running on a legacy operating system (not recommended, but sometimes unavoidable), the latest ClusterControl packages that can be installed are:
- clustercontrol-controller-2.1.0-10601
- clustercontrol2-2.2.5-1655
- clustercontrol-notifications-2.0.0-344
- clustercontrol-cloud-2.0.0-408
- clustercontrol-clud-2.0.0-408
- clustercontrol-ssh-2.0.0-166
Info
For an offline upgrade, pre-download the above packages from the Severalnines Download Site.
The following is an example of ClusterControl upgrade on a RHEL-based operating system (CentOS 7):
- Perform the upgrade to the selected version:
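For example, on CentOS 7 (a sketch that pins the package versions listed above):
$ yum clean all
$ yum install -y \
    clustercontrol-controller-2.1.0-10601 \
    clustercontrol2-2.2.5-1655 \
    clustercontrol-notifications-2.0.0-344 \
    clustercontrol-cloud-2.0.0-408 \
    clustercontrol-clud-2.0.0-408 \
    clustercontrol-ssh-2.0.0-166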
- Restart the ClusterControl services:
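For example (the service names are an assumption and may vary with the components installed; this version still serves the GUI through Apache):
$ systemctl restart cmon cmon-events cmon-ssh cmon-cloud httpd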
The upgrade is now complete.
Note
If you are upgrading from ClusterControl 1.9.8 and older, you will have two ClusterControl GUIs (v1 and v2), both accessible as below:
- ClusterControl GUI v1 (feature-freeze, stays at v1.9.8, provided by the clustercontrol package): https://<ClusterControl_host>/clustercontrol
- ClusterControl GUI v2 (latest, version 2.x, provided by the clustercontrol2 package): https://<ClusterControl_host>:9443/
Starting from ClusterControl v2, time-series monitoring is performed by Prometheus, so you need to go to ClusterControl GUI → choose a cluster → Dashboards → Enable Agent-Based Monitoring for every cluster. ClusterControl will deploy Prometheus with the corresponding exporters on all managed nodes.
Activating web-based SSH for ClusterControl GUI v2
Info
Follow the instructions in this section if ClusterControl web-based SSH in ClusterControl GUI v2 is not activated (or not working after an upgrade).
Starting from ClusterControl 1.9.8, ClusterControl GUI v2 is able to perform web-based SSH using a WebSocket. ClusterControl will use the operating system user configured during the cluster deployment/import and open a new browser window to access the SSH terminal via ClusterControl.
Note
The ClusterControl GUI user must have the "Manage" privilege on the cluster to use this feature.
To activate web-based SSH in ClusterControl GUI v2, the Apache configuration file must be modified to ensure the feature is secure and that ClusterControl can exchange HTTP token authentication properly.
- Ensure you have upgraded to the latest version of the ClusterControl GUI v2 package. The package name and version should be at least clustercontrol2.x86_64 2.1.0-1203.
- Edit /etc/httpd/conf.d/cc-frontend.conf (RedHat-based) or /etc/apache2/conf.d/sites-available/cc-frontend.conf (Debian-based) and make sure the following lines exist inside the <VirtualHost *:9443> section:

# Proxy settings
SSLProxyEngine on
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerExpire off
SSLProxyCheckPeerName off
SSLProxyCACertificateFile /var/lib/cmon/ca/cmon/rpc_tls.crt

<LocationMatch /cc-license>
    ProxyPass https://severalnines.com/service/lic.php
    ProxyPassReverse https://severalnines.com/service/lic.php
</LocationMatch>

<LocationMatch /api/v2/>
    ProxyPass https://127.0.0.1:9501/v2/
    ProxyPassReverse https://127.0.0.1:9501/v2/
    Header edit Set-Cookie ^(.*)$ "$1; Path=/"
</LocationMatch>

<LocationMatch /api/events-test/>
    ProxyPass http://127.0.0.1:9510/test/
    ProxyPassReverse http://127.0.0.1:9510/test/
</LocationMatch>

<Location /cmon-ssh/cmon/ws/>
    RewriteEngine On
    RewriteCond %{REQUEST_URI} ^/cmon-ssh/cmon/ws/(.*)$
    RewriteRule ^(.*)$ ws://127.0.0.1:9511/cmon/ws/%1 [P,L]
</Location>

<LocationMatch /cmon-ssh/>
    ProxyPass http://127.0.0.1:9511/
    ProxyPassReverse http://127.0.0.1:9511/
</LocationMatch>
This is an example after we have updated the configuration file on a Rocky Linux 8 system, /etc/httpd/conf.d/cc-frontend.conf:

<VirtualHost *:9443>
    ServerName 157.230.37.193
    ServerAlias *.severalnines.local
    DocumentRoot /var/www/html/clustercontrol2

    #ErrorLog /var/log/httpd/cc-frontend-error.log
    #CustomLog /var/log/httpd/cc-frontend-access.log combined
    #ErrorLog ${APACHE_LOG_DIR}/cc-frontend-error.log
    #CustomLog ${APACHE_LOG_DIR}/cc-frontend-access.log combined

    # HTTP Strict Transport Security (mod_headers is required) (63072000 seconds)
    Header always set Strict-Transport-Security "max-age=63072000"

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/s9server.crt
    SSLCertificateKeyFile /etc/ssl/private/s9server.key

    <Directory />
        Options +FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    <Directory /var/www/html/clustercontrol2>
        Options +Indexes +Includes +FollowSymLinks -MultiViews
        AllowOverride All
        RewriteEngine On
        # If an existing asset or directory is requested go to it as it is
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
        RewriteRule ^ - [L]
        # If the requested resource doesn't exist, use index.html
        RewriteRule ^ /index.html
    </Directory>

    # Proxy settings
    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerExpire off
    SSLProxyCheckPeerName off
    SSLProxyCACertificateFile /var/lib/cmon/ca/cmon/rpc_tls.crt

    <LocationMatch /cc-license>
        ProxyPass https://severalnines.com/service/lic.php
        ProxyPassReverse https://severalnines.com/service/lic.php
    </LocationMatch>

    <LocationMatch /api/v2/>
        ProxyPass https://127.0.0.1:9501/v2/
        ProxyPassReverse https://127.0.0.1:9501/v2/
        Header edit Set-Cookie ^(.*)$ "$1; Path=/"
    </LocationMatch>

    <LocationMatch /api/events-test/>
        ProxyPass http://127.0.0.1:9510/test/
        ProxyPassReverse http://127.0.0.1:9510/test/
    </LocationMatch>

    <Location /cmon-ssh/cmon/ws/>
        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^/cmon-ssh/cmon/ws/(.*)$
        RewriteRule ^(.*)$ ws://127.0.0.1:9511/cmon/ws/%1 [P,L]
    </Location>

    <LocationMatch /cmon-ssh/>
        ProxyPass http://127.0.0.1:9511/
        ProxyPassReverse http://127.0.0.1:9511/
    </LocationMatch>
</VirtualHost>

# intermediate configuration
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
SSLHonorCipherOrder off
SSLSessionTickets off
# SSLUseStapling On
# SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"
- Restart the Apache service:
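For example:
$ systemctl restart httpd      # RedHat-based
$ systemctl restart apache2    # Debian-based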
- Finally, log out from the ClusterControl GUI v2 and log in again to activate the new cookie settings for HTTP token authentication.
You can now use the web SSH feature accessible under Nodes → Actions → SSH Console.