Backup and Restore

Starting from ClusterControl 1.7.1, you can back up the ClusterControl server and restore it (together with metadata about your managed databases) onto another server using the ClusterControl CLI. The backup covers the ClusterControl application as well as all of its configuration data.

There are four options available under the s9s backup command:

Flag                     Description
--save-controller        Saves the state of the controller into a tarball.
--restore-controller     Restores the entire controller from a tarball previously created with --save-controller.
--save-cluster-info      Saves the information the controller has about one cluster.
--restore-cluster-info   Restores the information the controller has about a cluster from a previously created archive file.
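
All four operations are modes of the same s9s backup command. For the complete, version-specific list of options, consult the built-in help:

s9s backup --help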

Back up all clusters

To back up the ClusterControl controller together with all clusters and their metadata, run the following command on the ClusterControl node as the root user (or with sudo):

s9s backup \
    --save-controller \
    --backup-directory=$HOME/ccbackup \
    --output-file=controller.tar.gz \
    --log

The --output-file must be a filename, or a full path if you want to omit the --backup-directory flag, and the file must not exist beforehand; ClusterControl will not replace an existing output file. With the --log flag, the command waits until the job is executed and prints the job logs in the terminal. The same logs can be accessed via ClusterControl GUI → Activity center → Jobs → Save Controller.
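
For example, a variant of the command above that passes a full path directly and therefore omits --backup-directory (a sketch; /storage/ccbackup is an arbitrary directory chosen for illustration). Appending the date keeps the file name unique across runs, since the output file must not exist beforehand:

s9s backup \
    --save-controller \
    --output-file=/storage/ccbackup/controller-$(date +%F).tar.gz \
    --log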

The save controller job performs the following steps:

  1. Retrieve the controller configuration and export it to JSON.
  2. Export the CMON database as a MySQL dump file.
  3. For every database cluster:
    1. Retrieve the cluster configuration and export it to JSON.
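
Given the steps above, the resulting tarball should contain the exported JSON metadata plus the CMON SQL dump. A quick way to inspect it (the exact file names and layout inside the archive may vary between versions):

tar -tzf $HOME/ccbackup/controller.tar.gz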

Note

In the output, you may notice that the job reports N + 1 clusters, for example, Found 3 cluster(s) to save even though there are only two database clusters. The count includes cluster ID 0, which carries a special meaning in ClusterControl as the global initialized cluster. However, it does not belong to the CmonCluster component, which represents a database cluster under ClusterControl management.

Back up an individual cluster

To back up an individual cluster together with its metadata, use the --cluster-id option to specify the cluster ID, and run the command on the ClusterControl node as the root user (or with sudo):

s9s backup \
    --save-cluster-info \
    --cluster-id=2 \
    --backup-directory=$HOME/ccbackup \
    --output-file=cc-replication-2.tar.gz \
    --log
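
The counterpart flag, --restore-cluster-info, imports a single cluster's metadata from such an archive. A minimal sketch, assuming the tarball created above is available on the target controller:

s9s backup \
    --restore-cluster-info \
    --input-file=$HOME/ccbackup/cc-replication-2.tar.gz \
    --log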

See also

For examples and instructions, see the blog post How to Backup and Restore ClusterControl.

Restoring to another ClusterControl server

  1. Start by installing the same ClusterControl version on the new host. See Installation.
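
     For example, using the standard installer script (a sketch; the script installs the latest version by default, so check the Installation page if you need to pin a specific version):

    # Download and run the ClusterControl installer script
    wget https://severalnines.com/downloads/cmon/install-cc
    chmod +x install-cc
    sudo ./install-cc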

  2. Before performing the restoration, make sure the SSH key that ClusterControl uses to access the database nodes is available at the same path as on the old server. For example, if the key file is located at /root/.ssh/id_rsa on the old server, the same path and file must exist on the new ClusterControl host.
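
     For example, a minimal sketch of copying the key over (old-cc-server stands in for the old ClusterControl host's address):

    # Copy the private key from the old host, preserving its timestamps
    scp -p root@old-cc-server:/root/.ssh/id_rsa /root/.ssh/id_rsa
    # SSH requires restrictive permissions on private keys
    chmod 600 /root/.ssh/id_rsa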

  3. Perform the restoration:

    s9s backup \
        --restore-controller \
        --input-file=$HOME/controller.tar.gz \
        --debug \
        --log
    
  4. Verify that the clusters were restored correctly with the following command:

    s9s cluster --list --long
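
If the restore succeeded, the output should list the same clusters as on the old controller. As an additional sanity check (a suggestion beyond the documented procedure), you can also list the managed nodes:

s9s node --list --long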