Migrating Existing Clusters from ClusterControl v1 to ClusterControl v2

In this article, we explore the procedure for migrating existing database clusters from the ClusterControl GUI v1 to the new ClusterControl GUI v2. This migration strategy is designed to ensure continuous operational integrity and minimize downtime throughout the transition. This approach is particularly advantageous for organizations and users who prefer a phased adoption strategy, allowing for thorough testing and validation of the new ClusterControl v2 interface and its enhanced functionalities in a controlled environment before a full-scale deployment. By following these guidelines, users can seamlessly transition their critical database infrastructure, leveraging the advancements offered by ClusterControl v2 while maintaining stability and performance.

Limitations of This Method

Before proceeding, it's important to understand the key limitations:

  • All operations that depend on Cluster UUIDs may be impacted due to the change of the UUIDs.
  • Monitoring starts fresh; historical metrics and logs are not migrated.
  • Manual tuning may be required for certain settings after migration (e.g., failover thresholds).
  • Custom scripts and hooks from GUI v1 require manual recreation, as they will not be imported automatically.
  • No rollback: reverting to GUI v1 after migrating and disabling it is not straightforward.

Pre-Migration Checks

Thorough pre-migration checks are essential for a smooth and successful migration. They mitigate risk, prevent data loss, minimize downtime, and protect system integrity, helping you avoid complications, delays, and unexpected costs:

  • In ClusterControl GUI v1, verify the health and stability of your clusters.
  • Review custom alerting, operational reports schedules, and backup schedules.
  • Make a list of integrations: email, LDAP, cloud backups, notifications, third-party integrations, etc.
  • Make a list of users who log in to ClusterControl GUI v1.
  • Ensure SSH access to all nodes is functional for the user running ClusterControl in your new environment with ClusterControl GUI v2.
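The last check above can be scripted. The sketch below loops over a hypothetical node list (substitute your own IPs and the OS user that ClusterControl v2 will connect as) and reports which hosts accept key-based SSH:

```shell
#!/bin/bash
# Hypothetical node list -- replace with the IPs of your own database hosts.
NODES="192.168.100.31 192.168.100.32 192.168.100.34"
SSH_USER="root"

for node in $NODES; do
    # BatchMode=yes fails fast instead of prompting for a password, so a
    # failure here means key-based SSH is not yet configured for this host.
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$SSH_USER@$node" true 2>/dev/null; then
        echo "OK:   $node"
    else
        echo "FAIL: $node"
    fi
done
```

Any `FAIL` line must be fixed before importing the cluster, or the controller will not be able to manage that node.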

Installing ClusterControl v2

Follow these steps:

  1. Prepare a new host or VM (can be bare-metal or cloud).

  2. Download and install ClusterControl GUI v2 from the official installer or repository. On the ClusterControl server, run the following commands:

    wget https://severalnines.com/downloads/cmon/install-cc
    chmod +x install-cc
    sudo ./install-cc # omit sudo if you run as root

  3. Configure passwordless SSH access to the database nodes. See SSH Key-based Authentication.
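Once the installer finishes, it can help to confirm the controller stack is running before importing anything. The service names below are what a standard install-cc setup creates; adjust if your installation differs:

```shell
# Check the core ClusterControl services installed by install-cc.
SERVICES="cmon cmon-ssh cmon-events cmon-cloud"

for svc in $SERVICES; do
    if command -v systemctl >/dev/null 2>&1; then
        # is-active prints "active" and exits 0 when the unit is running.
        state=$(systemctl is-active "$svc" 2>/dev/null || true)
        echo "$svc: ${state:-unknown}"
    fi
done

# The s9s CLI should now be able to authenticate against the controller:
if command -v s9s >/dev/null 2>&1; then
    s9s user --whoami
fi
```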

Importing Existing Clusters into ClusterControl v2

The Import Database Cluster feature in ClusterControl enables users to bring existing database clusters under management with minimal effort. Instead of deploying a new cluster, this option allows you to integrate and monitor your already running database environments seamlessly.

Prerequisites

Before importing any database cluster, several prerequisites must be met:

  1. Make sure the target database nodes are running on a supported architecture platform and operating system. See Hardware and Operating System.
  2. Passwordless SSH (SSH using key-based authentication) is configured from the ClusterControl node to all database nodes. See SSH Key-based Authentication.
  3. Verify that sudo is working properly if you are using a non-root user. See Operating System User.
  4. The target cluster must not be in a degraded state. For example, if you have a three-node Galera cluster, all nodes must be alive, accessible, and in sync.
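For a Galera cluster, the in-sync requirement in step 4 can be verified from any node before starting the import. This is only a sketch and assumes root MySQL access:

```shell
# On any Galera node: a healthy, importable cluster reports
# wsrep_cluster_status = Primary, wsrep_local_state_comment = Synced,
# and wsrep_cluster_size equal to the expected node count (e.g., 3).
QUERY="SHOW GLOBAL STATUS WHERE Variable_name IN
       ('wsrep_cluster_status','wsrep_local_state_comment','wsrep_cluster_size');"

if command -v mysql >/dev/null 2>&1; then
    mysql -uroot -p -e "$QUERY"
fi
```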

Below are examples for each supported database type:

Importing a MySQL Cluster

For this to work, ClusterControl expects all instances within a group to use the same MySQL root password. It will also automatically try to identify each server's role (primary, replica, multi, or standalone).

Go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "MySQL Replication". See more on Import MySQL Cluster.

Alternatively, the same import can be performed with the s9s CLI:

    s9s cluster --register \
        --cluster-type=mysqlreplication \
        --nodes="192.168.100.31;192.168.100.32" \
        --vendor=MariaDB \
        --provider-version=10.11 \
        --db-admin="root" \
        --db-admin-passwd="root123" \
        --os-user=root \
        --os-key-file=/root/.ssh/id_rsa \
        --cluster-name="My MariaDB Cluster" \
        --wait
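After the `--wait` job finishes, you can confirm from the CLI that the cluster registered and its nodes are visible. The cluster ID below is an assumption; read the real one from the list output:

```shell
CLUSTER_ID=1   # assumed ID of the newly imported cluster

if command -v s9s >/dev/null 2>&1; then
    # List all clusters known to the controller; the imported one
    # should appear with state STARTED.
    s9s cluster --list --long

    # Inspect the nodes of the imported cluster; replace CLUSTER_ID
    # with the value shown by the command above.
    s9s node --list --long --cluster-id="$CLUSTER_ID"
fi
```

The same check applies after the PostgreSQL and MongoDB imports below.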

Importing a PostgreSQL Cluster

For this to work, ClusterControl expects all instances within a group to use the same database administrator password. It will also automatically discover the role of each node (primary or standby) and detect whether the TimescaleDB extension is in use.

Go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "PostgreSQL Streaming". See more on Import PostgreSQL Cluster.

Alternatively, the same import can be performed with the s9s CLI:

    s9s cluster --register \
        --cluster-type=postgresql \
        --nodes="192.168.100.34;192.168.100.35" \
        --provider-version=15 \
        --db-admin="root" \
        --db-admin-passwd="root123" \
        --os-user=root \
        --os-key-file=/root/.ssh/id_rsa \
        --cluster-name="My PostgreSQL Cluster" \
        --wait

Importing a MongoDB Cluster

For this to work, ClusterControl expects all instances within a group to use the same MongoDB admin user password. It will also automatically discover the database role of each node (mongos, config server, shard server, primary, secondary, arbiter).

Go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "MongoDB ReplicaSet". See more on Import MongoDB Cluster.

Alternatively, the same import can be performed with the s9s CLI:

    s9s cluster --register \
        --cluster-type=mongodb \
        --nodes="192.168.100.38;192.168.100.39" \
        --vendor=Percona \
        --provider-version=6 \
        --db-admin="root" \
        --db-admin-passwd="root123" \
        --os-user=root \
        --os-key-file=/root/.ssh/id_rsa \
        --cluster-name="My MongoDB Cluster" \
        --wait

Importing Configurations from ClusterControl v1

While cluster monitoring is imported during the steps above, the following configurations must be manually migrated or re-created in v2:

Users & Access

  • CC users and teams: Re-create admin/operator accounts. See Users and teams

  • LDAP/AD integration: Replicate the LDAP configuration. See LDAP

Notification Settings

  • Re-create email notification recipients and any third-party alerting integrations (e.g., Slack, PagerDuty, webhooks). See Notification Settings

SSL Certificates

  • Import SSL keys and certificates into ClusterControl’s certificate repository. See SSL Certificates

Backup Schedules

  • Use the UI or CLI (s9s backup) to re-create your backup schedules. See Schedule Backup
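As a sketch, a v1 nightly logical-backup schedule could be re-created with the s9s CLI roughly as follows; the cluster ID, node, backup directory, and cron expression are placeholders for your own values:

```shell
# Schedule a nightly mysqldump backup at 02:00 via the s9s CLI.
CLUSTER_ID=1                      # assumed ID of the imported cluster
BACKUP_NODE="192.168.100.31"      # node to take the backup from
RECURRENCE="0 2 * * *"            # crontab format: daily at 02:00

if command -v s9s >/dev/null 2>&1; then
    s9s backup --create \
        --cluster-id="$CLUSTER_ID" \
        --nodes="$BACKUP_NODE" \
        --backup-method=mysqldump \
        --backup-dir=/var/backups/cmon \
        --recurrence="$RECURRENCE"
fi
```

Repeat for each schedule you listed during the pre-migration checks, matching the frequency and retention you had in v1.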

Operational Reports

  • Set up Operational reports (weekly/monthly) and delivery email addresses as needed. See Operational Reports

Post-Migration Checks

After migration, it is crucial to perform a series of post-migration checks and optimizations to ensure the new system is operating efficiently and reliably. This comprehensive process typically involves:

  1. Cluster Functionality

     • Verify that monitoring is operational, data is being collected accurately, and anomaly notifications arrive on time. Check dashboards and alert configurations to ensure all metrics are tracked.

     • In GUI v2, validate the topology diagrams. This ensures that the visual representation of your system's architecture accurately reflects its actual configuration and intended design.

     • Simulate a failover to test resilience, transition time, and client reconnection. Before doing so, confirm data synchronization, cluster health, and application awareness.

  2. Cluster ID & Identity

The Cluster ID (UUID) will be different from that in GUI v1, requiring immediate updates to all external tools and scripts that rely on it. Failure to update will cause errors.
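The new identifiers can be read from the s9s CLI so external scripts can be updated; this is a sketch, and the cluster ID of 1 is an assumption:

```shell
CLUSTER_ID=1   # assumed ID of the imported cluster

if command -v s9s >/dev/null 2>&1; then
    # Show each cluster's numeric ID and name; use these values to update
    # any external tools that still reference the v1 cluster ID/UUID.
    s9s cluster --list --long

    # Detailed properties (including identifiers) of one cluster:
    s9s cluster --stat --cluster-id="$CLUSTER_ID"
fi
```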

  3. Settings Validation

Check alert thresholds, backup schedules, and user privileges:

  • Alert Thresholds: Fine-tune CPU, memory, and disk space alerts for effective early warnings without false positives. Verify notification channels and escalation procedures.
  • Backup Schedules: Confirm backup frequency and retention, regularly test integrity and restorability, monitor completion, and verify secure offsite storage.
  • User Privileges: Conduct access reviews for all users and confirm the privileges assigned to each.