Migrating ClusterControl to Another Server

This guide details the migration process for your ClusterControl instance from one server to another.

Example use cases

Here are real-world reasons to move ClusterControl to a new server:

  • Operating system, hardware refresh or cloud move - Aging host machines, operating systems reaching end-of-life (e.g., Ubuntu 16.04), or moving to a new cloud provider, region, or VPC. Migrating ClusterControl ensures it runs on a supported, faster, or standardized platform.

  • Capacity and performance scaling - If the current ClusterControl server is insufficient for the number of managed clusters, nodes, or metrics, a move to a larger instance with more CPU, RAM, or SSD can significantly improve the speed of backups, reporting, and dashboards.

  • Security and compliance re-segmentation - You need the management plane in a hardened subnet, behind a new firewall model, or separating it from production networks to meet compliance standards (e.g., PCI/GDPR/SOX). Migrating ClusterControl lets you change IP addresses, rotate keys, and enforce new controls.

Prerequisites

Make sure the following conditions are met:

  • The existing ClusterControl host must be running the latest available version, with Prometheus agent-based monitoring enabled. Upgrade it first if necessary. See Upgrading ClusterControl.
  • The new ClusterControl server must have equal or greater capacity in terms of CPU, RAM, and disk space. Otherwise, the CMON database may fail to start due to an InnoDB buffer pool initialization failure.
  • It is highly recommended to use the same operating system distribution family as the old ClusterControl host. For example, if the old host runs CentOS 7, use a RHEL-based distribution such as Rocky Linux 9 on the new host.
  • Prepare a list of database nodes whose grants and privileges must be updated after the ClusterControl IP address changes. You can find all the nodes under the ClusterControl → Nodes section, or from the CLI as shown below.
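
For instance, if the s9s command-line client is configured on the old host, you can pull the same node list from the terminal; a minimal sketch:

# old server: list all managed nodes with their host addresses and cluster IDs
s9s node --list --long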

Migration steps

The following recommended steps install the latest version of ClusterControl on the new server and migrate your ClusterControl settings and historical data to it. The steps include:

  1. Install ClusterControl and Prometheus on the new server.
  2. Export the cmon database, ClusterControl and Prometheus configuration and data files.
  3. Replace any occurrences of the old IP address with the new IP address.
  4. Restore the cmon database.
  5. Update the cmon user with the new IP address on all the database nodes.
  6. Migrate the SSH key.
  7. Start services.

Info

Your database clusters are expected to remain unaffected; however, the migration steps will require downtime for ClusterControl.

Step 1: Install ClusterControl

Install ClusterControl on the new server using the installer script. This will automatically install and configure all ClusterControl’s packages and dependencies on the new server.

On the ClusterControl server, run the following commands:

wget https://severalnines.com/downloads/cmon/install-cc
chmod +x install-cc
sudo ./install-cc     # omit sudo if you run as root

If the new server is running in an air-gapped environment, you will need to install ClusterControl using the offline installation procedure. See Offline Installation for details.

Note

In the installation wizard, ensure that the MySQL root and cmon passwords match those of the old server. You can verify them in /etc/cmon.cnf.
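
For example, a quick way to check the cmon password configured on the old server (on a default installation it is stored under the mysql_password key in /etc/cmon.cnf):

# old server: show the cmon database password the new installation must reuse
grep ^mysql_password /etc/cmon.cnf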

Once the installation is done, stop the ClusterControl services on both the old and the new server to ensure consistent data, and also stop Prometheus on the old server:

# both new and old servers
systemctl stop cmon cmon-cloud cmon-events cmon-ssh
systemctl disable cmon
# old server
systemctl stop prometheus

Step 2: Install Prometheus on the new server

ClusterControl automatically installs Prometheus to store time-series monitoring data when the first cluster is added. Since a new ClusterControl host doesn't have an existing cluster, Prometheus won't be installed or running. Therefore, a manual Prometheus installation is required before importing monitoring data from the old server. This step can be skipped if you prefer to start with fresh monitoring data after the ClusterControl migration.

On the new server, download the latest version of Prometheus from https://prometheus.io/download/, install the Prometheus binaries, and create the operating system user:

# new server
# note: the version might change in the future
wget https://github.com/prometheus/prometheus/releases/download/v3.7.2/prometheus-3.7.2.linux-amd64.tar.gz
tar -xzf prometheus-*.tar.gz
cd prometheus-3.7.2.linux-amd64
cp prometheus /usr/local/bin/
cp promtool /usr/local/bin/
useradd --no-create-home --shell /bin/false prometheus
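
To confirm the binaries are in place before importing any monitoring data, a quick sanity check:

# new server: verify the Prometheus binaries are installed and on the PATH
prometheus --version
promtool --version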

Step 3: Export ClusterControl database

On the old server, take a MySQL dump of the cmon database using the --replace and --no-create-info flags:

# old server
mysqldump -uroot -p --databases cmon --replace --no-create-info > /root/cmondb_backup.sql 

Once the backup is done, copy the MySQL dump file to the new ClusterControl server:

# old server
scp /root/cmondb_backup.sql root@{new_server}:/root/
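
As an optional sanity check, compare checksums on both servers to confirm the dump file transferred intact:

# run on both the old and the new server; the output should match
md5sum /root/cmondb_backup.sql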

Step 4: Copy ClusterControl and Prometheus data files

On the old server, copy the ClusterControl controller configuration files, data files, and the Prometheus configuration and data files to the new server. It is recommended to back up the existing ClusterControl data and configuration files on the new server beforehand:

# backup CMON data on new server
mkdir -p /root/cc-backup
cp /etc/cmon.cnf /root/cc-backup/
cp -pfR /etc/cmon.d /root/cc-backup/
cp -pfR /var/lib/cmon /root/cc-backup/
cp /etc/s9s.conf /root/cc-backup/

# old server
# cmon.cnf - ClusterControl main config
scp /etc/cmon.cnf root@{new_server}:/etc/
# /etc/cmon.d/cmon*.cnf - Cluster config
scp -r /etc/cmon.d/* root@{new_server}:/etc/cmon.d/
# /var/lib/cmon - ClusterControl data files
scp -r /var/lib/cmon/* root@{new_server}:/var/lib/cmon/
# /etc/s9s.conf - ClusterControl CLI config
scp /etc/s9s.conf root@{new_server}:/etc/
# depending on the local user you configure to access CLI
scp ~/.s9s {user}@{new_server}:~/
# Prometheus-related data
scp -r /etc/prometheus root@{new_server}:/etc/
scp /etc/systemd/system/prometheus.service root@{new_server}:/etc/systemd/system/
scp -r /var/lib/prometheus root@{new_server}:/var/lib/
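
If the Prometheus data directory is large, rsync can be a more robust alternative to scp for that last copy, since it can resume interrupted transfers; a sketch, assuming root SSH access to the new server:

# old server: copy the Prometheus time-series data (resumable)
rsync -av /var/lib/prometheus root@{new_server}:/var/lib/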

Step 5: Replace any occurrences of the old IP address

On the new server, replace all occurrences of the old IP address in the ClusterControl controller configuration files, database dump file and Prometheus configuration file. Run these commands:

# new server
sed -i 's|old_ip|new_ip|g' /etc/cmon.cnf
sed -i 's|old_ip|new_ip|g' /etc/cmon.d/cmon*.cnf
sed -i 's|old_ip|new_ip|g' /root/cmondb_backup.sql
sed -i 's|old_ip|new_ip|g' /etc/prometheus/prometheus.yml
Example
sed -i 's|192.168.99.11|192.168.99.201|g' /etc/cmon.cnf
sed -i 's|192.168.99.11|192.168.99.201|g' /etc/cmon.d/cmon*.cnf
sed -i 's|192.168.99.11|192.168.99.201|g' /root/cmondb_backup.sql
sed -i 's|192.168.99.11|192.168.99.201|g' /etc/prometheus/prometheus.yml
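
To confirm that no references to the old address remain, a quick check using the example addresses above:

# new server: this should return no matches if the replacement was complete
grep -r '192.168.99.11' /etc/cmon.cnf /etc/cmon.d/ /etc/prometheus/prometheus.yml /root/cmondb_backup.sql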

Step 6: Restore ClusterControl database

On the new server, restore the cmon database backup:

mysql -uroot -p < /root/cmondb_backup.sql
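
A quick way to confirm the restore succeeded is to check that the cmon schema is populated:

# new server: the cmon schema should list its tables after the restore
mysql -uroot -p -e "SHOW TABLES FROM cmon;"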

Step 7: Update the cmon user on all database nodes

To allow the new ClusterControl server to manage the existing database nodes, update the cmon user with the new IP address on all database nodes managed by the old ClusterControl host:

For MySQL, execute the following command only on:

  • The primary node for primary-replica replication and Group Replication clusters.
  • One of the database nodes for a Galera cluster.
-- execute on the primary database node only
RENAME USER 'cmon'@'{old_ip}' TO 'cmon'@'{new_ip}';
Example
SELECT user,host FROM mysql.user WHERE user = 'cmon'; -- before
RENAME USER 'cmon'@'192.168.99.11' TO 'cmon'@'192.168.99.201';
SELECT user,host FROM mysql.user WHERE user = 'cmon'; -- after
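
Once the rename has taken effect on all nodes, you can check that the cmon user is reachable from the new address; a sketch, where {db_node} is a placeholder for one of your database hosts and the password is the cmon password from /etc/cmon.cnf:

# new server: should return 1 if the updated grant is in effect
mysql -ucmon -p -h {db_node} -e "SELECT 1;"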

Example for PostgreSQL 17, where the pg_hba.conf is located at /etc/postgresql/17/main/pg_hba.conf:

# update the IP address
sed -i 's|old_ip|new_ip|g' /etc/postgresql/17/main/pg_hba.conf
# reload the pg_hba
psql> SELECT pg_reload_conf();
Example
$ sed -i 's|192.168.99.11|192.168.99.201|g' /etc/postgresql/17/main/pg_hba.conf
$ su - postgres
$ psql
psql> SELECT pg_reload_conf();
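
To confirm the replacement took effect, using the example addresses above:

# database node: the ClusterControl entries should now reference the new address
grep '192.168.99.201' /etc/postgresql/17/main/pg_hba.conf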

Step 8: Migrate SSH key

Copy the SSH private key used by the old server to the new server. If you use the root user, it is commonly located at /root/.ssh/id_rsa. For a sudo user, it is usually located at /home/{user}/.ssh/id_rsa.

# old server
scp ~/.ssh/id_rsa {user}@{new_server}:~/.ssh/id_rsa
Example
# if you use the root user to manage the cluster
$ scp ~/.ssh/id_rsa root@192.168.99.201:~/.ssh/
# if you use a sudo user to manage the cluster
$ scp ~/.ssh/id_rsa {user}@192.168.99.201:~/.ssh/

If you choose not to use the same SSH key, you may generate a new key, but keep the same key path as on the old server. For example, generate a new SSH key pair and replace the existing key file located at /root/.ssh/id_rsa. Then set up SSH key-based authentication from the ClusterControl server to all database nodes as shown in Requirements → SSH key-based authentication.
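
Whichever key you end up with, verify passwordless access from the new server to each managed node before starting the services; a minimal sketch, where {db_node} is a placeholder for one of your database hosts:

# new server: should print OK without prompting for a password
ssh -i ~/.ssh/id_rsa {user}@{db_node} "echo OK"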

Step 9: Start services

Once you have completed all the steps, set the correct ownership of the Prometheus directories, then start and enable the Prometheus and ClusterControl services:

chown -Rf prometheus:prometheus /etc/prometheus
chown -Rf prometheus:prometheus /var/lib/prometheus
systemctl enable prometheus
systemctl enable cmon
systemctl start prometheus
systemctl start cmon cmon-cloud cmon-events cmon-ssh
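
Before logging in, you can confirm that the controller came up cleanly; a quick check, assuming the default log location /var/log/cmon.log:

# new server: verify the services are active and review recent controller logs
systemctl status cmon prometheus --no-pager
tail -n 50 /var/log/cmon.log
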
Then open a web browser, go to https://<new_ClusterControl_host>/, and log in with your existing admin username and password on the welcome page.

Attention

If you store your backups on the ClusterControl node, you have to manually transfer them to the same location on the new ClusterControl server.

Post-migration checklist

After the migration, make sure the following aspects are working as expected:

  • Database cluster state - Go to ClusterControl GUI → Home and make sure all clusters are in the correct state.
  • Backup files transferred to the new server (if the ClusterControl host is the backup destination) - Go to ClusterControl GUI → Backups and make sure the database backups are correctly listed.
  • ClusterControl CLI can communicate with the ClusterControl controller - Run s9s cluster --list --long and make sure you see a list of clusters similar to the GUI.
  • Monitoring dashboards are working - Go to ClusterControl GUI → Clusters → choose a cluster → Dashboards and make sure recent monitoring data is populated. Otherwise, fix it via ClusterControl GUI → Clusters → choose a cluster → Dashboards → More → Re-enable agent-based monitoring.