
Upgrading ClusterControl

There are several ways to upgrade ClusterControl to the latest version. We recommend performing an Online Upgrade, whose instructions are kept the most up to date.

Online Upgrade

This is the recommended way to upgrade ClusterControl. The following upgrade procedures require an internet connection on the ClusterControl node.
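
Before you begin, it is prudent to take a copy of the ClusterControl configuration and the CMON database so you can roll back if something goes wrong. The following is a minimal sketch; it assumes the cmon schema lives on a local MySQL/MariaDB server, and the backup directory is an arbitrary example:

# Hypothetical backup location; adjust to your environment
BACKUP_DIR=/root/cc-backup-$(date +%F)
mkdir -p "$BACKUP_DIR"

# Copy the controller configuration files
cp -a /etc/cmon.cnf /etc/cmon.d "$BACKUP_DIR"/

# Dump the cmon database (you will be prompted for the cmon database password,
# which can usually be found in /etc/cmon.cnf)
mysqldump -u cmon -p --single-transaction cmon > "$BACKUP_DIR"/cmon.sql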

Attention

The latest ClusterControl v2.2.0 (Sept 2024) is only compatible with GLIBC 2.27 and later (available in RHEL/Rocky Linux/AlmaLinux >8, Ubuntu >20.04, Debian >10), due to major changes in supporting OpenSSL v3 and FIPS 140-2 requirements.

Consequently, ClusterControl v2.2.0 cannot be installed on operating systems with an older GLIBC version, for example CentOS/RHEL 7 or Debian 9. The latest version available for those systems is v2.1.0. See Upgrading to v2.1.0 for Legacy Operating Systems.
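
To confirm whether a host meets this requirement, you can check its GLIBC version before upgrading, for example:

# Print the GNU C Library (GLIBC) version of this host
ldd --version | head -n1
# or
getconf GNU_LIBC_VERSION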

Red Hat/CentOS/Rocky Linux/AlmaLinux version 8/9

1) Clear the repository cache so it will retrieve the latest repository list and perform the upgrade:

For yum:

yum clean all
yum install clustercontrol-controller \
  clustercontrol2 \
  clustercontrol-ssh \
  clustercontrol-notifications \
  clustercontrol-cloud \
  clustercontrol-clud \
  s9s-tools

For dnf:

dnf clean all
dnf install clustercontrol-controller \
  clustercontrol2 \
  clustercontrol-ssh \
  clustercontrol-notifications \
  clustercontrol-cloud \
  clustercontrol-clud \
  s9s-tools

2) Restart the ClusterControl services:

For sysvinit/upstart:

service cmon restart
service cmon-ssh restart
service cmon-events restart
service cmon-cloud restart

For systemd:

systemctl daemon-reload
systemctl restart cmon cmon-ssh cmon-events cmon-cloud

The upgrade is now complete.

Note

If you are upgrading from ClusterControl 1.9.8 or older, you will have two ClusterControl GUIs (v1 and v2), both accessible as below:

  • ClusterControl GUI v1 (feature-freeze, stays at v1.9.8, provided by package clustercontrol): https://{ClusterControl_IP_address_or_hostname}/clustercontrol
  • ClusterControl GUI v2 (latest, version 2.x, provided by package clustercontrol2): https://{ClusterControl_IP_address_or_hostname}:9443/

Starting from ClusterControl v2, time-series monitoring is performed by Prometheus, so you need to go to ClusterControl GUI → choose a cluster → Dashboards → Enable Agent-Based Monitoring for every cluster. ClusterControl will deploy Prometheus with the corresponding exporters on all managed nodes.

Verify the new version from the GUI’s footer or by using the command cmon -v. Re-login if your ClusterControl UI session is still active.
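
Optionally, you can also confirm from the terminal that the controller and its companion services came back up after the restart:

# Show the status of the ClusterControl services (systemd)
systemctl --no-pager status cmon cmon-ssh cmon-events cmon-cloud

# Print the installed controller version
cmon -v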

Debian >10/Ubuntu >20.04

1) Update the repository list and perform the upgrade:

sudo apt-get update
sudo apt-get install clustercontrol-controller \
  clustercontrol2 \
  clustercontrol-ssh \
  clustercontrol-notifications \
  clustercontrol-cloud \
  clustercontrol-clud \
  s9s-tools

2) Restart the ClusterControl services:

For sysvinit/upstart:

service cmon restart
service cmon-ssh restart
service cmon-events restart
service cmon-cloud restart

For systemd:

systemctl daemon-reload
systemctl restart cmon cmon-ssh cmon-events cmon-cloud

The upgrade is now complete.

Note

If you are upgrading from ClusterControl 1.9.8 or older, you will have two ClusterControl GUIs (v1 and v2), both accessible as below:

  • ClusterControl GUI v1 (feature-freeze, stays at v1.9.8, provided by package clustercontrol): https://{ClusterControl_IP_address_or_hostname}/clustercontrol
  • ClusterControl GUI v2 (latest, version 2.x, provided by package clustercontrol2): https://{ClusterControl_IP_address_or_hostname}:9443/

Starting from ClusterControl v2, time-series monitoring is performed by Prometheus, so you need to go to ClusterControl GUI → choose a cluster → Dashboards → Enable Agent-Based Monitoring for every cluster. ClusterControl will deploy Prometheus with the corresponding exporters on all managed nodes.

Verify the new version from the GUI’s footer or by using the command cmon -v. Re-login if your ClusterControl UI session is still active.

Offline Upgrade

The following upgrade procedures can be performed without an internet connection on the ClusterControl node.

Attention

The latest ClusterControl v2.2.0 (Sept 2024) is only compatible with GLIBC 2.27 and later (available in RHEL/Rocky Linux/AlmaLinux >8, Ubuntu >20.04, Debian >10), due to major changes in supporting OpenSSL v3 and FIPS 140-2 requirements.

Consequently, ClusterControl v2.2.0 cannot be installed on operating systems with an older GLIBC version, for example CentOS/RHEL 7 or Debian 9. The latest version available for those systems is v2.1.0. See Upgrading to v2.1.0 for Legacy Operating Systems.

Red Hat/CentOS/Rocky Linux/AlmaLinux

1) Download the latest version of the ClusterControl-related RPM packages from the Severalnines download site and the Severalnines Repository. The packages you need are listed below, followed by an example of copying them to the ClusterControl node:

  • clustercontrol – ClusterControl UI
  • clustercontrol2 – ClusterControl UI v2
  • clustercontrol-controller – ClusterControl Controller (CMON)
  • clustercontrol-notifications – ClusterControl event module
  • clustercontrol-ssh – ClusterControl web-ssh module
  • clustercontrol-cloud – ClusterControl cloud module
  • clustercontrol-clud – ClusterControl cloud’s file manager module
  • s9s-tools – ClusterControl CLI (s9s) – Download it from here.
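
Once the packages are downloaded on an internet-connected machine, copy them over to the ClusterControl node. The host name and staging directory below are placeholders:

# Transfer the downloaded packages to the offline ClusterControl node (adjust host and path)
scp clustercontrol*.rpm s9s-tools*.rpm root@clustercontrol-host:/root/cc-packages/

Run the commands in the next step from that staging directory.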

2) Install using yum to satisfy all dependencies:

yum localinstall clustercontrol*
yum localinstall s9s-tools*

3) Restart the ClusterControl services:

For sysvinit/upstart:

service cmon restart
service cmon-ssh restart
service cmon-events restart
service cmon-cloud restart

For systemd:

systemctl daemon-reload
systemctl restart cmon cmon-ssh cmon-events cmon-cloud

The upgrade is now complete.

Note

If you are upgrading from ClusterControl 1.9.8 or older, you will have two ClusterControl GUIs (v1 and v2), both accessible as below:

  • ClusterControl GUI v1 (feature-freeze, stays at v1.9.8, provided by package clustercontrol): https://{ClusterControl_IP_address_or_hostname}/clustercontrol
  • ClusterControl GUI v2 (latest, version 2.x, provided by package clustercontrol2): https://{ClusterControl_IP_address_or_hostname}:9443/

Starting from ClusterControl v2, time-series monitoring is performed by Prometheus, so you need to go to ClusterControl GUI → choose a cluster → Dashboards → Enable Agent-Based Monitoring for every cluster. ClusterControl will deploy Prometheus with the corresponding exporters on all managed nodes.

Verify the new version from the GUI’s footer or by using the command cmon -v. Re-login if your ClusterControl UI session is still active.

Upgrading to v2.1.0 for Legacy Operating Systems

Starting from ClusterControl v2.2.0 (Sept 2024), ClusterControl requires GLIBC 2.27 and later due to major changes in supporting OpenSSL v3 and FIPS 140-2 requirements. This is only offered in the following operating systems:

  • Red Hat Enterprise Linux/Rocky Linux/AlmaLinux >8
  • Ubuntu >20.04
  • Debian >10

We highly recommend upgrading the operating system to a supported version shown in the Operating System section for long-term support. One approach is to prepare a new ClusterControl server on a supported operating system, install the latest ClusterControl (a fresh installation of v2 and above ships only the new GUI v2), and then import the database clusters into the new ClusterControl v2.2.0, as sketched below.
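
As an illustration only, an existing cluster can also be imported into the new ClusterControl server with the s9s CLI; the cluster type, node addresses, vendor, version, and name below are placeholders to adapt to your environment:

# Hypothetical example: import an existing Galera cluster into the new ClusterControl server
s9s cluster --register \
  --cluster-type=galera \
  --nodes="10.0.0.11;10.0.0.12;10.0.0.13" \
  --vendor=percona \
  --provider-version=8.0 \
  --os-user=root \
  --cluster-name="imported-cluster" \
  --wait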

If you want to upgrade ClusterControl on a legacy operating system (not recommended, but sometimes unavoidable), the latest ClusterControl packages that can be installed are:

  • clustercontrol-controller-2.1.0-10601
  • clustercontrol2-2.2.5-1655
  • clustercontrol-notifications-2.0.0-344
  • clustercontrol-cloud-2.0.0-408
  • clustercontrol-clud-2.0.0-408
  • clustercontrol-ssh-2.0.0-166

The following is an example on a RHEL-based operating system:

1) Perform the upgrade to the selected version:

yum install clustercontrol-controller-2.1.0-10601 \
  clustercontrol2-2.2.5-1655 \
  clustercontrol-notifications-2.0.0-344 \
  clustercontrol-cloud-2.0.0-408 \
  clustercontrol-clud-2.0.0-408 \
  clustercontrol-ssh-2.0.0-166 \
  s9s-tools

2) Restart the ClusterControl services:

systemctl restart cmon cmon-ssh cmon-events cmon-cloud

The upgrade is now complete.

Note

If you are upgrading from ClusterControl 1.9.8 or older, you will have two ClusterControl GUIs (v1 and v2), both accessible as below:

  • ClusterControl GUI v1 (feature-freeze, stays at v1.9.8, provided by package clustercontrol): https://{ClusterControl_IP_address_or_hostname}/clustercontrol
  • ClusterControl GUI v2 (latest, version 2.x, provided by package clustercontrol2): https://{ClusterControl_IP_address_or_hostname}:9443/

Starting from ClusterControl v2, time-series monitoring is performed by Prometheus, so you need to go to ClusterControl GUI → choose a cluster → Dashboards → Enable Agent-Based Monitoring for every cluster. ClusterControl will deploy Prometheus with the corresponding exporters on all managed nodes.

Activating Web-based SSH for ClusterControl GUI v2

Starting from ClusterControl 1.9.8, ClusterControl GUI v2 can perform web-based SSH using a WebSocket. ClusterControl uses the operating system user configured during cluster deployment/import and opens a new browser window to access the SSH terminal via ClusterControl.

Note

The ClusterControl GUI user must have “Manage” privilege to the cluster to use this feature.

To activate web-based SSH on ClusterControl GUI v2, the Apache configuration file must be modified so that the feature is secure and ClusterControl can exchange HTTP token authentication properly.

1. Ensure you have upgraded to the latest version of the ClusterControl GUI v2 package. The package name and version should be at least clustercontrol2.x86_64 2.1.0-1203.
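
You can verify the installed GUI v2 package version with the system package manager, for example:

# RHEL-based systems
rpm -q clustercontrol2

# Debian-based systems
dpkg -l clustercontrol2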

2. Edit /etc/httpd/conf.d/cc-frontend.conf (RHEL-based) or /etc/apache2/sites-available/cc-frontend.conf (Debian-based) and make sure the following lines exist inside the <VirtualHost *:9443> section.

        # Proxy settings
        SSLProxyEngine on
        SSLProxyVerify none
        SSLProxyCheckPeerCN off
        SSLProxyCheckPeerExpire off
        SSLProxyCheckPeerName off
        SSLProxyCACertificateFile /var/lib/cmon/ca/cmon/rpc_tls.crt

        <LocationMatch /cc-license>
            ProxyPass https://severalnines.com/service/lic.php
            ProxyPassReverse https://severalnines.com/service/lic.php
        </LocationMatch>

        <LocationMatch /api/v2/>
            ProxyPass https://127.0.0.1:9501/v2/
            ProxyPassReverse https://127.0.0.1:9501/v2/
            Header edit Set-Cookie ^(.*)$ "$1; Path=/"
        </LocationMatch>

        <LocationMatch /api/events-test/>
            ProxyPass http://127.0.0.1:9510/test/
            ProxyPassReverse http://127.0.0.1:9510/test/
        </LocationMatch>


        <Location /cmon-ssh/cmon/ws/>
            RewriteEngine On
            RewriteCond %{REQUEST_URI} ^/cmon-ssh/cmon/ws/(.*)$
            RewriteRule ^(.*)$ ws://127.0.0.1:9511/cmon/ws/%1 [P,L]
        </Location>

        <LocationMatch /cmon-ssh/>
            ProxyPass http://127.0.0.1:9511/
            ProxyPassReverse http://127.0.0.1:9511/
        </LocationMatch>

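The WebSocket rewrite rule (ws://...) relies on Apache's proxy and WebSocket tunnelling modules. On RHEL-based installations these are typically loaded by default; on Debian-based systems you may need to enable them, for example:

# Debian/Ubuntu: enable the Apache modules used by the configuration above (if not already enabled)
a2enmod proxy proxy_http proxy_wstunnel rewrite headers ssl
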
This is an example of the updated configuration file, /etc/httpd/conf.d/cc-frontend.conf, on a Rocky Linux 8 system:

<VirtualHost *:9443>
        ServerName 157.230.37.193
        ServerAlias *.severalnines.local

        DocumentRoot /var/www/html/clustercontrol2
        #ErrorLog /var/log/httpd/cc-frontend-error.log
        #CustomLog /var/log/httpd/cc-frontend-access.log combined
        #ErrorLog ${APACHE_LOG_DIR}/cc-frontend-error.log
        #CustomLog ${APACHE_LOG_DIR}/cc-frontend-access.log combined

        # HTTP Strict Transport Security (mod_headers is required) (63072000 seconds)
        Header always set Strict-Transport-Security "max-age=63072000"

        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/s9server.crt
        SSLCertificateKeyFile /etc/ssl/private/s9server.key

        <Directory />
                Options +FollowSymLinks
                AllowOverride All
                Require all granted
        </Directory>

        <Directory /var/www/html/clustercontrol2>
                Options +Indexes +Includes +FollowSymLinks -MultiViews
                AllowOverride All

                RewriteEngine On
                # If an existing asset or directory is requested go to it as it is
                RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]
                RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
                RewriteRule ^ - [L]
                # If the requested resource doesn't exist, use index.html
                RewriteRule ^ /index.html
        </Directory>

        # Proxy settings
        SSLProxyEngine on
        SSLProxyVerify none
        SSLProxyCheckPeerCN off
        SSLProxyCheckPeerExpire off
        SSLProxyCheckPeerName off
        SSLProxyCACertificateFile /var/lib/cmon/ca/cmon/rpc_tls.crt

        <LocationMatch /cc-license>
            ProxyPass https://severalnines.com/service/lic.php
            ProxyPassReverse https://severalnines.com/service/lic.php
        </LocationMatch>

        <LocationMatch /api/v2/>
            ProxyPass https://127.0.0.1:9501/v2/
            ProxyPassReverse https://127.0.0.1:9501/v2/
            Header edit Set-Cookie ^(.*)$ "$1; Path=/"
        </LocationMatch>

        <LocationMatch /api/events-test/>
            ProxyPass http://127.0.0.1:9510/test/
            ProxyPassReverse http://127.0.0.1:9510/test/
        </LocationMatch>


        <Location /cmon-ssh/cmon/ws/>
            RewriteEngine On
            RewriteCond %{REQUEST_URI} ^/cmon-ssh/cmon/ws/(.*)$
            RewriteRule ^(.*)$ ws://127.0.0.1:9511/cmon/ws/%1 [P,L]
        </Location>

        <LocationMatch /cmon-ssh/>
            ProxyPass http://127.0.0.1:9511/
            ProxyPassReverse http://127.0.0.1:9511/
        </LocationMatch>



</VirtualHost>
# intermediate configuration
SSLProtocol             all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite          ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
SSLHonorCipherOrder     off
SSLSessionTickets       off

# SSLUseStapling On
# SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"

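Before restarting, you can optionally validate the Apache configuration syntax:

# RHEL-based server
apachectl configtest

# Debian-based server
apache2ctl configtest
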
3. Restart the Apache service:

For RHEL-based server:

systemctl restart httpd

For Debian-based server:

systemctl restart apache2

4. Finally, log out from the ClusterControl GUI v2 and log back in to activate the new cookie settings for HTTP token authentication.

You can now use the web SSH feature, accessible under Nodes → Actions → SSH Console.
