Migrating Existing Clusters from ClusterControl v1 to ClusterControl v2

In this article, we explore the procedure for migrating existing database clusters from the ClusterControl GUI v1 to the new ClusterControl GUI v2. This migration strategy is designed to ensure continuous operational integrity and minimize downtime throughout the transition. This approach is particularly advantageous for organizations and users who prefer a phased adoption strategy, allowing for thorough testing and validation of the new ClusterControl v2 interface and its enhanced functionalities in a controlled environment before a full-scale deployment. By following these guidelines, users can seamlessly transition their critical database infrastructure, leveraging the advancements offered by ClusterControl v2 while maintaining stability and performance.

Limitations of This Method

Before proceeding, it's important to understand the key limitations:

  • All operations that depend on Cluster UUIDs may be impacted due to the change of the UUIDs.
  • You begin with new monitoring data as historical metrics and logs are not migrated.
  • Manual tuning may be required for certain settings after migration (e.g., failover thresholds).
  • Custom scripts and hooks from GUI v1 require manual recreation, as they will not be imported automatically.
  • No rollback: once GUI v1 has been disabled after the migration, reverting to it is not straightforward.

Pre-Migration Checks

Thorough pre-migration checks are essential for a seamless and successful migration. They mitigate risk, help prevent data loss, minimize downtime, and protect system integrity, averting complications, delays, increased costs, and disruptions:

  • In ClusterControl GUI v1, verify the health and stability of your clusters.
  • Review custom alerting, operational reports schedules, and backup schedules.
  • Make a list of integrations: email, LDAP, cloud backups, notifications, 3rd party integrations etc.
  • Make a list of users who log in to ClusterControl GUI v1.
  • Ensure SSH access to all nodes is functional for the user that will run ClusterControl in your new ClusterControl GUI v2 environment.

Installing ClusterControl v2

Follow these steps:

  1. Prepare a new host or virtual machine.

  2. Download and install ClusterControl GUI v2 from the official installer or repository. On the ClusterControl server, run the following commands:

    wget https://severalnines.com/downloads/cmon/install-cc
    chmod +x install-cc
    sudo ./install-cc # omit sudo if you run as root
    
  3. Configure passwordless SSH access to the database nodes. See SSH Key-based Authentication.
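
    A minimal sketch, assuming ClusterControl runs as root and three hypothetical database nodes at 10.0.10.20-22 (adjust the user and hosts to your environment):

        # Generate a key pair for the user that runs ClusterControl (accept the defaults)
        ssh-keygen -t rsa
        # Copy the public key to every database node
        ssh-copy-id root@10.0.10.20
        ssh-copy-id root@10.0.10.21
        ssh-copy-id root@10.0.10.22
        # Verify that key-based login works without a password prompt
        ssh -o BatchMode=yes root@10.0.10.20 "hostname"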

Importing Existing Clusters into ClusterControl v2

The Import Database Cluster feature in ClusterControl enables you to bring existing database clusters, currently managed by ClusterControl v1, under ClusterControl v2 management with minimal effort. The following cluster types can be imported from ClusterControl v1 to v2:

  • MySQL/MariaDB/Percona Server standalone
  • MySQL/MariaDB/Percona Server replication
  • MariaDB Galera Cluster
  • Percona XtraDB Cluster
  • PostgreSQL streaming replication
  • TimescaleDB streaming replication
  • MongoDB replica set
  • MongoDB sharded cluster

The remaining supported cluster types, such as Redis/Valkey (Sentinel and Cluster), PostgreSQL logical replication, SQL Server, and Elasticsearch, are only available in ClusterControl v2.

Prerequisites

Before importing a database cluster, several prerequisites must be met:

  1. Passwordless SSH (SSH using key-based authentication) is configured from the new ClusterControl v2 node to all database nodes. See SSH Key-based Authentication.
  2. The target cluster must not be in a degraded state. For example, if you have a three-node Galera cluster, all nodes must be alive, accessible, and in sync.
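
For a Galera-based cluster, a quick way to confirm the second prerequisite is to check the wsrep status on every node (a sketch; the root login is an assumption, adjust to your environment):

    mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"  # should report 'Synced'
    mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"         # should match the expected node count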

Below are examples for each supported database type:

Importing a MySQL/MariaDB standalone or replication

For this to work, ClusterControl expects all instances within a group to use the same MySQL root password. It will also automatically try to identify each server's role (primary, replica, multi, or standalone).
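
If you want to confirm the roles yourself before importing, you can inspect the read-only flags on each node (a sketch; note that super_read_only exists on Oracle MySQL and Percona Server but not on MariaDB):

    mysql -uroot -p -e "SELECT @@global.read_only, @@global.super_read_only"  # typically 0,0 on the primary and 1,1 on replicas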

  1. To import an existing MySQL replication cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "MySQL Replication".

  2. Under Cluster details, specify the cluster details you want to assign:

    • Name: The cluster name (optional). Once imported, ClusterControl uses this as the cluster's registered name.
    • Tags: Add tags to search or group your database clusters.
  3. Click Continue.

  4. Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:

    • SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
    • SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must exist on the ClusterControl node and be kept secure.
    • SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
    • SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
    • SSH sudo / OS elevation command: The OS elevation command (sudo, doas, pbrun) that ClusterControl will use for a non-root account.
  5. Click Continue to proceed to the next step.

  6. Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:

    • Server port: The database server port that ClusterControl will use to connect to all database nodes.
    • Server data directory: The database server data directory path.
    • Admin/Root user: The database admin username that ClusterControl will use to connect to all database nodes. This user must be granted global SUPER privilege and the GRANT option for localhost only.
    • Admin/Root password: The password for Admin/Root user.
    • Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special repository configuration from the vendor (commonly for enterprise databases); ClusterControl will then skip the repository configuration step.
    • information_schema queries: Off by default. When enabled, ClusterControl (cmon) is allowed to query information_schema to collect metrics from your MySQL database nodes.
    • Cluster auto-recovery: Off by default. If toggled on, ClusterControl will attempt to recover a degraded or failed cluster automatically.
    • Node auto-recovery: Off by default. If toggled on, ClusterControl will attempt to restart failed nodes (for example, an accidentally terminated database process) so they can rejoin the cluster.
  7. Click Continue to proceed to the next step.

  8. Under the Add nodes section, specify your existing target database nodes. You can add more than one node, depending on the MySQL Replication cluster you want to register with ClusterControl. Since the cluster already exists, ClusterControl will scan the nodes and identify the topology of your database cluster.

    • Node: Specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.

      Note

      You can only proceed to the next step if all of the specified nodes are reachable (shown in green).

  9. Click Continue to proceed to the Preview page. In this section, you can see the summary of your import and if everything is correct, you may proceed to import the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The settings will be kept until you exit the import wizard.

  10. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.

  • Alternatively, import with the s9s CLI. The example below imports a three-node Oracle MySQL Replication 8.0 cluster with operating system user "ubuntu"; regardless of the node order, ClusterControl will identify the primary node(s) through the read_only and super_read_only variables:

    s9s cluster --register \
        --cluster-type=mysqlreplication \
        --nodes="10.0.10.20;10.0.10.21;10.0.10.22" \
        --vendor=oracle \
        --provider-version=8.0 \
        --db-admin-passwd='mYpa$$word' \
        --os-user=ubuntu \
        --os-key-file=/home/ubuntu/.ssh/id_rsa \
        --cluster-name='PROD - MySQL Replication 8.0' \
        --wait \
        --log
    
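
Once the registration job finishes, the imported cluster is visible from both the GUI and the s9s CLI, together with the new cluster ID assigned by ClusterControl v2:

    s9s job --list              # follow the progress of the registration job
    s9s cluster --list --long   # the imported cluster appears with its new cluster ID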

Importing a PostgreSQL/TimescaleDB cluster

For this to work, ClusterControl expects all instances within a group to use the same database administrator password. It will also automatically discover the database role of each node (primary, replica, TimescaleDB extension).
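
To double-check which node is the primary before importing, you can query the recovery state on each node (a sketch, assuming the default postgres admin user):

    psql -U postgres -c "SELECT pg_is_in_recovery();"  # returns 'f' on the primary, 't' on replicas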

  1. To import an existing PostgreSQL streaming replication cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "PostgreSQL Streaming" for PostgreSQL or "TimescaleDB" for TimescaleDB.

  2. Under Cluster details, specify the cluster details you want to assign:

    • Name: The cluster name (optional).
    • Tags: Add tags to search or group your database clusters.
  3. Click Continue.

  4. Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:

    • SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
    • SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must exist on the ClusterControl node and be kept secure.
    • SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
    • SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
    • SSH sudo / OS elevation command: The OS elevation command (sudo, doas, pbrun) that ClusterControl will use for a non-root account.
  5. Click Continue to proceed to the next step.

  6. Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:

    • Server port: The database server port that ClusterControl will use to connect to all database nodes.
    • User: The database admin username that ClusterControl will use to connect to all database nodes. This user must be a global super user.
    • Password: The password for User.
    • Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special repository configuration from the vendor (commonly for enterprise databases); ClusterControl will then skip the repository configuration step.
  7. Click Continue to proceed to the next step.

  8. Under the Add nodes section, you can specify the target database nodes and configure the database topology that you want to import:

    • Primary node: Specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic. Only one primary node is allowed for single-primary replication.
    • Replica nodes: Specify the IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic. You can specify zero or more replica nodes.

    Note

    You can only proceed to the next step if all of the specified nodes are reachable (shown in green).

    Note

    If you have more than one replica (secondary) node, make sure to add all of them under the Replica nodes field.

  9. Click Continue to proceed to the Preview page. In this section, you can see the summary of your import and if everything is correct, you may proceed to import the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The settings will be kept until you exit the import wizard.

  10. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.

  • Import a three-node PostgreSQL 16 streaming replication cluster with operating system user "ubuntu" (the first node is the primary) and the tags production and postgres, then wait until the job is done while printing the job logs during the import:

    s9s cluster --register \
        --cluster-type=postgresql \
        --nodes="10.10.10.11;10.10.10.12;10.10.10.13" \
        --vendor=postgresql \
        --provider-version=16 \
        --db-admin='postgres' \
        --db-admin-passwd='mYpa$$word' \
        --os-user=ubuntu \
        --os-key-file=/home/ubuntu/.ssh/id_rsa \
        --with-tags='production;postgres' \
        --cluster-name='PostgreSQL 16 - Streaming Replication' \
        --wait --log
    
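
After the import, replication health can be cross-checked on the primary node (a sketch, assuming the default postgres admin user):

    psql -U postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"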

Importing a MongoDB replica set

For this to work, ClusterControl expects all instances within a group to use the same MongoDB admin user password. It will also automatically discover the database role of each node (mongos, config server, shard server, primary, secondary, arbiter).
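
To verify the replica set health and member roles before importing, you can query the replica set status from any member (a sketch; host, user, and password are placeholders):

    mongosh --host 192.168.40.190 -u mydbuser -p --authenticationDatabase admin \
        --eval "rs.status().members.forEach(m => print(m.name, m.stateStr))"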

  1. To import an existing MongoDB replica set, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "MongoDB ReplicaSet".

  2. Under Cluster details, specify the cluster details you want to assign:

    • Vendor: Found under the Vendor and version label in the panel. This is the vendor whose binaries your MongoDB ReplicaSet is running. This is required; choose from Percona, MongoDB, or MongoDB Enterprise.
    • Name: The cluster name (optional).
    • Tags: Add tags to search or group your database clusters.
  3. Click Continue.

  4. Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:

    • SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
    • SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must exist on the ClusterControl node and be kept secure.
    • SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
    • SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
    • SSH sudo / OS elevation command: The OS elevation command (sudo, doas, pbrun) that ClusterControl will use for a non-root account.
  5. Click Continue to proceed to the next step.

  6. Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:

    • Server port: The database server port that ClusterControl will use to connect to all database nodes.
    • User: The database admin username that ClusterControl will use to connect to all database nodes. This user should have a global superuser role (preferably root) and be allowed for localhost only.
    • Password: The password for User.
    • MongoDB Auth DB: The authentication database to authenticate to.
    • Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special repository configuration from the vendor (commonly for enterprise databases); ClusterControl will then skip the repository configuration step.
  7. Click Continue to proceed to the next step.

  8. Under the Add nodes section, you can specify the target database nodes and configure the database topology that you want to import:

    • Node: Only specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.

    Note

    You can only proceed to the next step if all of the specified nodes are reachable (shown in green).

  9. Click Continue to proceed to the Preview page. In this section, you can see the summary of your import and if everything is correct, you may proceed to import the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The settings will be kept until you exit the import wizard.

  10. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.

  • To import a three-node MongoDB Replica Set using Percona binaries, you only need to specify the primary node. For example, if the primary runs on 192.168.40.190 and a replica on 192.168.40.191, there is no need to specify the replica; it will be auto-detected and imported by ClusterControl as well. The following command demonstrates this with ubuntu as the OS user, an ed25519 SSH key, and the job logs printed during the import:

    s9s cluster --register \
        --cluster-type=mongodb \
        --nodes="192.168.40.190" \
        --vendor=percona \
        --provider-version='7.0' \
        --os-user=ubuntu \
        --os-key-file=/home/ubuntu/.ssh/id_ed25519 \
        --db-admin='mydbuser' \
        --db-admin-passwd='mydbPassw0rd' \
        --cluster-name='MongoDB Percona Server ReplicaSet 7.0' \
        --wait --log
    
  • To import a three-node MongoDB Enterprise Replica Set 7.0 with operating system user "root", letting the deployment job run in the background (without --wait or --log, the command returns immediately):

    s9s cluster --register \
        --cluster-type=mongodbenterprise \
        --nodes="192.168.40.155" \
        --vendor=mongodbenterprise \
        --provider-version='7.0' \
        --os-user=root \
        --os-key-file=/root/.ssh/id_rsa \
        --db-admin='mydbuser' \
        --db-admin-passwd='mydbPassw0rd' \
        --cluster-name='MongoDB Enterprise ReplicaSet 7.0'
    
  • To import a MongoDB Sharded Cluster using Percona binaries with three router (mongos) nodes, using ubuntu as the OS user, an ed25519 SSH key, and the job logs printed during the import:

    s9s cluster --register \
        --cluster-type=mongodb \
        --nodes="mongos://192.168.1.11;mongos://192.168.1.12;mongos://192.168.1.12;"
        --vendor=percona \
        --provider-version='7.0' \
        --os-user=ubuntu \
        --os-key-file=/home/ubuntu/.ssh/id_ed25519 \
        --db-admin='mydbuser' \
        --db-admin-passwd='mydbPassw0rd' \
        --cluster-name='MongoDB Percona Server Shards 7.0' \
        --wait --log
    
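
When the job completes, all routers, config servers, and shard members discovered by ClusterControl can be listed from the controller (assuming the new cluster was assigned ID 1; check with s9s cluster --list --long):

    s9s node --list --long --cluster-id=1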

Importing Configurations from ClusterControl v1

While cluster monitoring is imported during the steps above, the following configurations must be manually migrated or re-created in ClusterControl v2:

Users and Access

  • Re-create ClusterControl admin/operator accounts and groups:

    To create a user, go to Create user or team → Create user and fill in the following information:

    Details
      • First name: The first name of the user.
      • Last name: The last name of the user.
      • Username: The username of the user. In ClusterControl v2, the username is used to log in, in contrast to v1 (old GUI), where you logged in with an e-mail address and password combination.
      • Password: The user's password.
      • Email: The user's email address. Although not mandatory, it should be a valid address, as it is used for password resets and for receiving alerts via email.
      • Timezone: Choose a timezone from the dropdown list. It determines how dates and times of ClusterControl events and monitoring data points are presented.
    Team
      • Team: Choose a team from the dropdown. This is considered the primary group for this user. A user can be assigned to multiple groups (teams).
    • List all users:

      s9s user --list --long
      
      Example
      $ s9s user --list --long
      A ID UNAME      GNAME  EMAIL REALNAME
      -  1 system     admins -     System User
      -  2 nobody     nobody -     -
      A  3 dba        users  -     -
      -  4 remote_dba users  -     -
      
    • Create a user called "john" with administrative privilege:

      s9s user \
          --create \
          --group=admins \
          --generate-key \
          --new-password=s3cr3tP455 \
          --email-address=john@example.com \
          --first-name=John \
          --last-name=Doe \
          --batch \
          john
      

    See Users and teams for details.

  • LDAP/AD integration: Replicate the LDAP configuration. See LDAP for details.

Notification Settings

  • For email recipients, add them manually, copying from what you have on ClusterControl v1:

    1. To configure an SMTP mail server, go to ClusterControl GUI → Settings → Configure now. It will open a wizard called "Configure mail server". Choose SMTP Server.

    2. In the "Configure mail server: SMTP" window, configure the following:

      • Server Address: The SMTP mail server address that you are going to use to send the email.
      • Port: The SMTP port for the mail server. Usually, this value is 25 or 587, depending on your SMTP mail server configuration.
      • User name: The SMTP user name. Leave empty if no authentication is required.
      • Password: The SMTP password for User name. Leave empty if no authentication is required.
      • Reply to/from: Specify the sender of the email. This will appear in the 'From' field of the mail header.
      • TLS/SSL required: Check this box if you want to use TLS/SSL for extra security. The mail server must support TLS/SSL.
    3. Check Send test email to verify the SMTP configuration. If you receive an email from ClusterControl, click Save and proceed to configure the mail recipients.

    See Email Notifications for details.

  • Third-party integrations (Slack, PagerDuty): Configure again by going to ClusterControl GUI → Settings → Notification services → Add new integration. This will give you a dialog box to choose the notification service to use:

    Notifications Services

    See Notification services for details.

SSL Certificates

Import SSL keys and certificates into ClusterControl’s certificate repository. To access this go to ClusterControl GUI → Settings → Certificate management → More → Import certificate:

Import SSL Certificates

The imported keys and certificates can then be used to enable SSL encryption for server-client connections, replication, or backups at a later stage. Before you perform the import, bear in mind the following:

  1. Upload your certificate and key to a directory in the ClusterControl Controller host.
  2. Uncheck the Self-signed Certificate checkbox if the certificate is not self-signed.
  3. You need to also provide a CA certificate if the certificate is not self-signed.
  4. Duplicate certificates will not be created.
  • Destination Path: Where you want the certificate to be imported to. Click on the file explorer window on the left to change the path.
  • Save As: Certificate name.
  • Certificate File: Absolute path to the certificate file. For example: /home/user/ssl/file.crt.
  • Private Key File: Absolute path to the key file. For example: /home/user/ssl/file.key.
  • Self-signed Certificate: Uncheck the checkbox if the certificate is not self-signed.
  • Import: Start the import process.
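
A minimal sketch of the upload step (step 1 above), assuming the certificate and key currently sit on your workstation and reusing the example paths from the field list (the host name and user are placeholders):

    scp file.crt file.key user@clustercontrol-host:/home/user/ssl/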

Backup Schedules

Use the UI or CLI (s9s backup) to re-create your backup schedules. To access this go to ClusterControl GUI → Select cluster → Backups → Create backup → Schedule backup:

Schedule Backup in ClusterControl UI

See Schedule Backup for details.
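
If you prefer the CLI, the sketch below re-creates a nightly mysqldump schedule for a hypothetical cluster ID 1, node, and backup directory; the available flags can differ between s9s versions, so verify them with s9s backup --help:

    s9s backup --create \
        --backup-method=mysqldump \
        --cluster-id=1 \
        --nodes=10.0.10.21:3306 \
        --backup-directory=/storage/backups \
        --recurrence='0 2 * * *'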

Scheduled Operational Reports

Set up Scheduled Operational reports (weekly/monthly) and delivery email addresses as needed:

Scheduled Operational Reports in ClusterControl UI

  1. In the ClusterControl UI, go to Sidebar → Operational Reports.
  2. Choose the Schedules tab and click the Create schedule button.
  3. Choose the cluster, the report template, and the time range, and fill in the email recipients.
  4. Set the schedule for delivery of the report.

See Operational Reports for details.

Post-Migration Checks

After migration, it is crucial to perform a series of post-migration checks and optimizations to ensure the new system is operating efficiently and reliably. This comprehensive process typically involves:

  1. Cluster functionality

    • Regularly verify monitoring systems are operational, accurately collecting data, and capable of timely anomaly notifications. This includes checking dashboards and alert configurations to ensure all metrics are tracked.
    • In GUI v2, proceed to validate the topology diagrams. This crucial step ensures that the visual representation of your system's architecture accurately reflects its actual configuration and intended design.
    • Simulate a failover to test resilience, observe transition time, and client reconnection. However, before that, ensure data synchronization, cluster health, and application awareness.
  2. The Cluster ID (UUID) will be different from that in GUI v1, requiring immediate updates to all external tools and scripts that rely on it; failure to update them will cause errors (a quick way to retrieve the new ID with the s9s CLI is sketched after this list).

  3. Check alert thresholds, backup schedules, and user privileges:

    • Alert Thresholds: Fine-tune CPU, memory, and disk space alerts to ensure effective early warnings without false positives. Verify notification channels and escalation procedures.
    • Backup Schedules: Confirm backup frequency/retention, regularly test integrity/restorability, monitor completion, and verify secure offsite storage.
    • User Privileges: Conduct access reviews for all users, confirming the privileges assigned to each.
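
To retrieve the new cluster IDs when updating external tools and scripts (see item 2 above), list the clusters from the controller:

    s9s cluster --list --long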