This page provides information and management options for all backups and schedules across all clusters managed by ClusterControl. Clicking on Backups in the sidebar menu lists all backups on the main panel; each entry has a summary with a dropdown management menu (see Backup Actions).
To create an on-demand backup, or create a new backup schedule, click on Create backup. See Create Backup.
Each backup entry will have the following details:
- ID: The backup ID. Every attempted backup will be assigned a backup ID, regardless of the backup status (completed, failed, defined, etc).
- Info: Roll over the info icon to see the cluster information and status.
- Cluster: The cluster name.
- Method: The backup method used to create the backup.
- Status: The backup job status.
- Created: The backup creation time, shown relative to when this page was last refreshed.
- Size: The backup size, if the backup is successfully created.
- Backup Host: The host where ClusterControl performs the backup.
- Storage: The backup storage location. Hovering over the server icon reveals the local storage path, while the cloud icon reveals the cloud storage provider and its path (if any).
- Actions: Backup management functions. See Backup Actions.
Clicking on the Backup Schedules tab will list all backup schedules. Each entry will have the following details:
- Name: Backup schedule’s name.
- Cluster: The cluster name.
- Method: The selected backup method for this backup schedule.
- Status: The backup schedule status, either Active or Paused.
- Schedule: The time schedule to run the backup job.
- Backup Host: The host where ClusterControl will perform the backup.
- Storage Host: The host where ClusterControl will store the created backup.
- Storage Location: The location to store the created backup.
- Last Execution: The last time ClusterControl performed the backup job for this backup schedule.
- Actions: Backup schedule management features. See Backup Schedule Actions.
Clicking on the Elastic Repositories tab will list all Elasticsearch snapshot repositories for the chosen cluster in the dropdown. A repository is a mandatory configuration when deploying a new Elasticsearch cluster using ClusterControl. See Elasticsearch.
Create Backup
Creates a new backup instantly or configures a new backup schedule, to be run at a specific time or interval. ClusterControl provides multiple ways to configure your database backup depending on the database cluster type. By clicking on the Create backup button, you will be presented with the following 3 options:
- Backup on demand – Create a full backup instantly. You can store the backup on-premises (locally on the ClusterControl server or the database node itself) or in supported cloud storage such as AWS S3, Google Cloud Storage, Azure Blob Storage, or any S3-compatible object storage provider. The backup can be compressed and encrypted.
- Schedule a backup – Define a schedule to create backups periodically, with the same on-premises and cloud storage choices and compression/encryption options as an on-demand backup. For a scheduled backup, you can also perform backup verification and automatic backup host failover.
- Elastic snapshot repository – Creates an Elasticsearch snapshot backup which can be stored locally or on AWS S3.
Backup on Demand
This feature allows users to create a new database backup instantly. Only full and differential backup methods are available here; to create an incremental backup, use the Schedule a Backup feature instead.
The available backup options depend on the cluster type:
- MySQL/MariaDB
- MongoDB
- PostgreSQL/TimescaleDB
- Redis
- Microsoft SQL Server
- Elasticsearch
- Valkey Cluster
MySQL/MariaDB
Applies to MySQL, Percona Server, Percona XtraDB Cluster, and MariaDB (Server and Galera Cluster).

To create an instant backup, you can create a full backup using mysqldump, Percona XtraBackup, or MariaDB Backup. Incremental backup using Percona XtraBackup or MariaDB Backup (MariaDB only) is only available under Schedule a Backup. Successful backups can be stored on the database host that performs the backup, or the files can be streamed over to the ClusterControl host for centralized storage.
Field | Description |
---|---|
Configuration | |
Backup method | |
Dump type | |
Backup host | |
Upload backup to cloud | |
Advanced Settings | |
Compression | |
Compression level | |
Enable encryption | |
Use QPress compression | |
Use extended insert | |
Retention | |
Desync node during backup | |
Backup locks | |
Lock DDL per table | |
Xtrabackup parallel copy threads | |
Network streaming throttle rate (MB/s) | |
Use PIGZ for parallel gzip | |
Enable partial backup | |
Databases | |
One dump file per DB | |
Local Storage | |
Storage location | |
Storage directory | |
Backup subdirectory | |
Cloud Storage | |
Credentials | |
Bucket | |
Retention period | |
Delete after upload | |
MongoDB
Percona Backup for MongoDB is available for both Replica Set and Sharded Cluster, while mongodump is only available for Replica Set. Note that Percona Backup for MongoDB requires a remote file server mounted to a local directory, e.g., NFS.
Field | Description |
---|---|
Configuration | |
Backup method | |
Upload backup to cloud | |
Advanced Settings | |
Enable encryption | |
Retention | |
Local Storage | |
Storage location | |
Storage directory | |
Backup subdirectory | |
Cloud Storage | |
Credentials | |
Bucket | |
Retention period | |
Delete after upload | |
PostgreSQL/TimescaleDB
To create an instant backup, you can create a full backup using pg_dumpall, pg_basebackup, or pgbackrestfull. Incremental backups using pgbackrestincr or differential backups using pgbackrestdiff are only available under Schedule a Backup. Successful backups can be stored on the database host that performs the backup, or the files can be streamed over to the ClusterControl host for centralized storage.
Field | Description |
---|---|
Configuration | |
Backup method | |
Backup host | |
Upload backup to cloud | |
PITR enabled | |
Advanced settings | |
Use compression | |
Compression level | |
Enable encryption | |
Retention | |
Enable partial backup | |
Local storage | |
Storage location | |
Storage directory | |
Backup subdirectory | |
Cloud Storage | |
Credentials | |
Bucket | |
Retention period | |
Delete after upload | |
Redis
Field | Description |
---|---|
Configuration | |
Backup host | |
Backup method | |
Upload backup to cloud | |
Advanced Settings | |
Compression | |
Compression level | |
Enable encryption | |
Retention | |
Local Storage | |
Storage location | |
Storage directory | |
Backup subdirectory | |
Cloud Storage | |
Credentials | |
Bucket | |
Retention period | |
Delete after upload | |
Microsoft SQL Server
Field | Description |
---|---|
Configuration | |
Backup host | |
Backup method | |
Upload backup to cloud | |
Advanced Settings | |
Compression | |
Include system databases | |
Retention | |
Local Storage | |
Storage location | |
Storage directory | |
Backup subdirectory | |
Cloud Storage | |
Credentials | |
Bucket | |
Retention period | |
Delete after upload | |
Elasticsearch
Field | Description |
---|---|
Configuration | |
Repository | |
Backup method | |
Advanced Settings | |
Retention | |
Valkey
Field | Description |
---|---|
Configuration | |
Backup host | |
Backup method | |
Upload backup to cloud | |
Advanced Settings | |
Compression | |
Compression level | |
Enable encryption | |
Retention | |
Local Storage | |
Storage location | |
Storage directory | |
Backup subdirectory | |
Cloud Storage | |
Credentials | |
Bucket | |
Retention period | |
Delete after upload | |
Backup Subdirectory
Variable | Description |
---|---|
B | The date and time when the backup creation began. |
H | The name of the backup host, the host that created the backup. |
i | The numerical ID of the cluster. |
I | The numerical ID of the backup. |
J | The numerical ID of the job that created the backup. |
M | The backup method (e.g. “mysqldump”). |
O | The name of the user who initiated the backup job. |
S | The name of the storage host, the host that stores the backup files. |
% | A literal percent sign. Write two percent signs (%%) to produce a single percent sign, the same way the standard printf() function interprets it. |
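As a quick illustration of how these variables compose into a subdirectory name, the substitution can be mimicked with a small shell helper. The `expand_subdir` function and all sample values below are hypothetical stand-ins for illustration only; they are not part of ClusterControl.

```shell
# Sample values standing in for a real backup job.
B="2024-01-15_020000"; H="db1"; i="1"; I="42"; J="1337"
M="mysqldump"; O="admin"; S="cc-server"

# Hypothetical re-implementation of the variable expansion:
# protect %% first, substitute each variable, then restore the literal %.
expand_subdir() {
    printf '%s\n' "$1" | sed \
        -e 's/%%/__PCT__/g' \
        -e "s/%B/$B/g" -e "s/%H/$H/g" \
        -e "s/%i/$i/g" -e "s/%I/$I/g" -e "s/%J/$J/g" \
        -e "s/%M/$M/g" -e "s/%O/$O/g" -e "s/%S/$S/g" \
        -e 's/__PCT__/%/g'
}

expand_subdir 'BACKUP-%I'          # BACKUP-42
expand_subdir 'cluster_%i/%M/%B'   # cluster_1/mysqldump/2024-01-15_020000
```

The second pattern shows how variables can be combined with literal path separators to group backups per cluster and method.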
Schedule a Backup
The functionality of backup scheduling is almost identical to Create a Backup; however, there are some additional options, as shown below:
Field | Description |
---|---|
Configuration | |
Schedule name | |
Schedule | |
Advanced settings | |
Failover backup | |
Failover host | |
Verify backup | |
Restore backup on | |
Install database software | |
Disable firewall | |
Disable SELinux/AppArmor | |
Shut down the server after backup restored | |
Start backup verification (after completion) | |
Elastic Snapshot repository
This feature allows users to register an off-cluster storage location, called a snapshot repository, with a cluster. ClusterControl currently supports S3-compatible snapshot storage (this requires a pre-configured cloud credential; see Cloud credential login).
Specify the repository name, storage location, cloud storage credentials, and an already-created bucket.
Backup Actions
Provides shortcuts to the backup’s functionality.
Feature | Description |
---|---|
Restore | |
Logs | |
Upload | |
Download | |
Delete | |
Backup Schedule Actions
This section explains the backup schedule functionalities for existing backup schedules. If you want to create a new backup schedule, see Schedule a Backup.
Feature | Description |
---|---|
Pause | |
Edit | |
Delete | |
Backup Methods
This section explains the backup method used by ClusterControl.
The backup process performed by ClusterControl runs as a background thread (RUNNING3), which does not block other non-backup jobs in the queue. If a backup job takes hours to complete, other non-backup jobs can still run simultaneously via the main thread (RUNNING). You can see the job progress at ClusterControl → Logs → Jobs.
mysqldump
Applies to MySQL, Percona Server, Percona XtraDB Cluster, and MariaDB (Server and Galera Cluster).

ClusterControl performs mysqldump against all or selected databases using the --single-transaction option. If it detects that binary logging is enabled on the particular node, it automatically adds --master-data=2 so that the binary log file and position are recorded in the dump file. ClusterControl generates a set of 4 mysqldump files with the following suffixes:

- _data.sql.gz – the schemas' data.
- _schema.sql.gz – the schemas' structure.
- _mysqldb.sql.gz – the MySQL system database.
- _triggerseventroutines.sql.gz – MySQL triggers, events, and routines.
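As a sketch of how the four files fit back together, the set can be replayed as a single SQL stream. The file names and contents below are dummies created on the fly; the restore order shown (schema before data, routines last) is a common convention rather than something ClusterControl mandates, and a real restore would pipe into the `mysql` client instead of a file.

```shell
# Dummy stand-ins for the four dump files (real ones come from ClusterControl).
workdir=$(mktemp -d)
cd "$workdir"
for part in schema data mysqldb triggerseventroutines; do
    printf -- '-- %s part\n' "$part" | gzip > "mydb_${part}.sql.gz"
done

# Replay as one stream: schema first, then data, then triggers/events/routines.
# The _mysqldb part (system database) is usually restored separately, if at all.
zcat mydb_schema.sql.gz mydb_data.sql.gz mydb_triggerseventroutines.sql.gz > restore.sql

# A real restore would instead look like:
#   zcat mydb_schema.sql.gz mydb_data.sql.gz ... | mysql -u root -p
cat restore.sql
```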
Percona Xtrabackup
Applies to MySQL, Percona Server, and Percona XtraDB Cluster.

Percona XtraBackup is an open-source MySQL hot backup utility from Percona. It is a combination of xtrabackup (written in C) and innobackupex (written in Perl) and can back up data from InnoDB, XtraDB, and MyISAM tables. XtraBackup does not lock your database during the backup process. For large databases (100+ GB), it provides a much better restoration time than mysqldump. The restoration process involves preparing the MySQL data from the backup files before replacing or switching it with the current data directory on the target node.

Thanks to its ability to create full and incremental MySQL backups, ClusterControl manages incremental backups and groups the combination of full and incremental backups in a backup set. A backup set takes its ID from the latest full backup; all incremental backups after a full backup belong to the same backup set. The backup set can then be restored as one single unit using the Restore Backup feature.
Without a full backup to start from, incremental backups are useless.
MariaDB Backup
Applies to MariaDB (Server and Galera Cluster).

MariaDB Backup is a fork of Percona XtraBackup with added support for the compression and data-at-rest encryption features available in MariaDB; it is included in MariaDB 10.1.23 and later. It is an open-source tool provided by MariaDB for performing physical online backups of InnoDB, Aria, and MyISAM tables. MariaDB Backup is available on Linux and Windows.

On all supported versions of MariaDB 10.1 and 10.2, ClusterControl defaults to MariaDB Backup as the preferred backup method and SST method.
pg_dumpall
Applies to PostgreSQL and TimescaleDB.

ClusterControl performs pg_dumpall against all databases (or pg_dump for individual databases) with the --clean option, which includes SQL commands to clean (drop) databases before recreating them, as well as DROP commands for roles and tablespaces. The output file has a .sql.gz extension and its name contains the timestamp of the backup.

It is possible to back up an individual database by choosing pg_dumpall as the backup method and then toggling Enable partial backup. Specify the database that you want to back up, and ClusterControl will perform the backup operation using the pg_dump command line instead.
pg_basebackup
Applies to PostgreSQL and TimescaleDB.

pg_basebackup is used to take base backups of a running PostgreSQL database cluster. These are taken without affecting other clients of the database and can be used both for point-in-time recovery and as the starting point for a log-shipping or streaming-replication standby server. It makes a binary copy of the database cluster files while making sure the system is put in and out of backup mode automatically. Backups are always taken of the entire database cluster; it is not possible to back up individual databases or database objects.

ClusterControl connects to the replication stream using the replication user (default is cmon_replication) with the --wal-method=fetch option when creating the backup. The output will be base.tar.gz inside the backup directory.
pgBackRest
Applies to PostgreSQL and TimescaleDB.

pgBackRest is open-source software developed to perform efficient backups on PostgreSQL databases that measure in tens of terabytes and greater. It supports per-file checksums, compression, partial/failed backup resume, high-performance parallel transfer, asynchronous archiving, tablespaces, expiration, full/differential/incremental backups, local/remote operation via SSH, hard-linking, and restore, among other features. The tool does not depend on rsync or tar but instead performs its own deltas, which gives it maximum flexibility.
Starting from ClusterControl 1.9.0, pgBackRest can be configured as follows:
- Primary: Install on the current primary but not on replicas. The backup repository (host to store the backup data) will be configured to be on the primary node. There will be no SSH configuration for pgBackRest.
- All database nodes: Install on all database nodes. The backup repository will be created on the current primary node. The backup will be made by using a standby node. PgBackRest will use SSH for communication between hosts.
- All database nodes and a dedicated repository host: Install on all PostgreSQL database nodes. The backup repository will be made on a specified host. The backup will be made by using a standby node. PgBackRest will use SSH for communication between hosts.
The pgbackrest backup directory cannot be reused for other backup methods.
During the first attempt at making a pgBackRest backup, ClusterControl will re-configure the node to install and configure pgBackRest. Take note that this operation requires a database restart and might introduce downtime to your database. A configuration file will be created at /etc/pgbackrest.conf, configured according to the version used and the location of the PostgreSQL data. The default pgBackRest repository path is /var/lib/pgbackrest. Additionally, ClusterControl will configure the following lines inside postgresql.conf (which is why the first run requires a restart):
```ini
archive_mode = on   # enables archiving; off, on, or always (change requires restart)
archive_command = 'pgbackrest --stanza=clustercontrol-stanza archive-push %p'   # command to use to archive a logfile segment
```
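For orientation, the generated /etc/pgbackrest.conf is broadly along these lines. This is a sketch only: the stanza name matches the archive_command above, but the repository path, PostgreSQL data path, and port are examples that depend on your version and layout.

```ini
[global]
# Where pgBackRest stores its backup repository (ClusterControl default path)
repo1-path=/var/lib/pgbackrest

[clustercontrol-stanza]
# Example data directory and port; adjust to your PostgreSQL installation
pg1-path=/var/lib/pgsql/14/data
pg1-port=5432
```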
In the ClusterControl GUI, pgBackRest nodes are listed with a fake port number (200000 + cluster ID) because the internal CmonHost object requires a host:port combination. In fact, the pgBackRest process is not daemonized and does not need a port number.
Full Backup
A pgBackRest full backup copies the entire contents of the database cluster into the backup. The first backup of the database cluster is always a full backup. pgBackRest can always restore a full backup directly; a full backup does not depend on any files outside of itself for consistency.
Differential Backup
For differential backup, pgBackRest copies only those database cluster files that have changed since the last full backup. pgBackRest restores a differential backup by copying all of the files in the chosen differential backup and the appropriate unchanged files from the previous full backup. The advantage of a differential backup is that it requires less disk space than a full backup, however, the differential backup and the full backup must both be valid to restore the differential backup.
For example, if a full backup is taken on Sunday and the following daily differential backups are scheduled, the data that is backed up will be:
- Monday – data from Sunday to Monday
- Tuesday – data from Sunday to Tuesday
- Wednesday – data from Sunday to Wednesday
- Thursday – data from Sunday to Thursday
Incremental Backup
For incremental backup, pgBackRest copies only those database cluster files that have changed since the last backup (which can be another incremental backup, a differential backup, or a full backup). As an incremental backup only includes those files changed since the prior backup, they are generally much smaller than full or differential backups. As with the differential backup, the incremental backup depends on other backups to be valid to restore the incremental backup. Since the incremental backup includes only those files since the last backup, all prior incremental backups back to the prior differential, the prior differential backup, and the prior full backup must all be valid to perform a restore of the incremental backup. If no differential backup exists then all prior incremental backups back to the prior full backup, which must exist, and the full backup itself must be valid to restore the incremental backup.
For example, if a full backup is taken on Sunday and the following daily incremental backups are scheduled, the data that is backed up will be:
- Monday – data from Sunday to Monday
- Tuesday – data from Monday to Tuesday
- Wednesday – data from Tuesday to Wednesday
- Thursday – data from Wednesday to Thursday
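The dependency chain above can be sketched with a small shell helper (hypothetical, not a ClusterControl or pgBackRest tool): given a weekday, it lists every backup that must be valid to restore that day's incremental, assuming a full backup on Sunday and daily incrementals afterwards.

```shell
# Hypothetical helper: list the backups needed to restore the incremental
# taken on a given weekday (full on Sunday, daily incrementals after).
restore_chain() {
    chain="full(Sun)"
    for day in Mon Tue Wed Thu Fri Sat; do
        chain="$chain incr($day)"
        [ "$day" = "$1" ] && break
    done
    printf '%s\n' "$chain"
}

restore_chain Tue   # full(Sun) incr(Mon) incr(Tue)
```

This makes the contrast with differential backups concrete: a Thursday differential needs only full(Sun) plus itself, while a Thursday incremental needs the full backup and every incremental in between.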
mongodump
Applies to MongoDB Replica Set.

ClusterControl performs the standard mongodump command with the --journal option, which allows mongodump operations to use the durability journal to ensure that the export is in a consistent state. This backup method is only available for MongoDB Replica Set.
mongodump (and mongorestore) is not available for MongoDB Sharded Cluster 4.2 and later that have sharded transactions in progress, as backups created with mongodump do not maintain the atomicity guarantees of transactions across shards.
Percona Backup for MongoDB
Applies to MongoDB Replica Set and MongoDB Sharded Cluster.

Percona Backup for MongoDB is a distributed, low-impact solution for achieving consistent backups of MongoDB sharded clusters and replica sets. It supports Percona Server for MongoDB and MongoDB Community v3.6 or higher with MongoDB replication enabled (standalone is not supported due to the dependency on MongoDB's oplog). The Percona Backup for MongoDB project inherited from and replaces mongodb-consistent-backup, which is no longer actively developed or supported.
Percona Backup for MongoDB requires an extra step for installation and configuration, and it is not enabled by default. You can use ClusterControl to install this tool by going to Clusters → choose the MongoDB cluster → Cluster Actions → Install Percona Backup, or simply choose the percona-backup-mongodb backup method from the dropdown list when configuring a backup. If the tool is not installed, ClusterControl will advise installing it first before moving on to the next step.
For older versions of MongoDB (earlier than 3.6), the only available backup method is mongodump.
Percona Backup for MongoDB requires a shared file system on a remote file server mounted to a local directory, e.g., NFS. It is the server administrators' responsibility to guarantee that the same remote directory is mounted at exactly the same local path on all servers in the MongoDB cluster or non-sharded replica set. If the path is accidentally a normal local directory, errors will eventually occur, most likely during a restore attempt.
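Because a forgotten mount only surfaces as an error at restore time, it can be worth checking up front on each node that the local directory really is a mount point. A minimal sketch follows; the directory path is an example, not a ClusterControl default, and the check compares device numbers rather than consulting the mount table.

```shell
# A directory is a mount point if it sits on a different device than its
# parent -- this catches the case where the NFS share was never mounted
# and the path is just an ordinary local directory.
is_mountpoint() {
    [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}

backup_dir=/mnt/pbm-backups   # example path; adjust to your NFS mount
if is_mountpoint "$backup_dir" 2>/dev/null; then
    echo "$backup_dir is a mounted filesystem"
else
    echo "WARNING: $backup_dir looks like a plain local directory" >&2
fi
```

Running this on every member of the replica set (or all shards) before the first backup gives an early warning instead of a failed restore.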
Redis Database Backup
Applies to Redis and Redis Sentinel.

RDB (Redis Database Backup) is a dump of all user data stored in an internal, compressed serialization format at a particular timestamp, used for point-in-time recovery. AOF (Append Only File) is a persistence technique in which an RDB file is generated once and all subsequent data is appended to it as it arrives.
When backing up Redis, ClusterControl backs up both RDB and AOF (if enabled) files on the selected database node.
MSSQL Backup
Applies to Microsoft SQL Server 2019.

ClusterControl performs a backup routine on Microsoft SQL Server using the sqlcmd client to connect to the SQL Server and take backups. For a full backup, ClusterControl iterates over all databases and creates a standard full backup. Full database backups represent the database at the time the backup finished.

A differential backup backs up only the data that has changed since the last full backup. This type of backup works with less data than a full database backup, which also shortens the time required to complete it. All differential backups are grouped under the last full backup on the Backup page.

At least one full backup must exist before any log backups can be created. After that, the transaction log can be backed up at any time unless it is already being backed up. Taking log backups frequently is recommended, both to minimize work-loss exposure and to truncate the transaction log.
A database administrator typically creates a full database backup occasionally, such as weekly, and, optionally, creates a series of differential database backups at shorter intervals, such as daily. Independent of the database backups, the database administrator backs up the transaction log at frequent intervals.
One limitation of MSSQL backups is that backups created by a more recent version of SQL Server cannot be restored in earlier versions of SQL Server.
Elasticsearch Snapshot
An Elasticsearch snapshot is a way to back up your Elasticsearch indices and data. The process involves creating a snapshot of one or more indices and storing it in a designated repository. This snapshot can then be used to restore the indices to their previous state in case of data loss or other issues. The snapshot process is incremental, meaning that only changes made since the last snapshot are stored. Snapshots can be taken manually or automatically on a schedule, and can also be used to migrate data between clusters or to create new indices.
ClusterControl configures the Elasticsearch snapshot storage at the deployment phase. At this stage, one has to specify the repository name, storage location, and file system path.
Valkey Database Backup
Applies to Valkey standalone and Valkey Cluster.

RDB (Valkey Database Backup) is a dump of all user data stored in an internal, compressed serialization format at a particular timestamp, used for point-in-time recovery. AOF (Append Only File) is a persistence technique in which an RDB file is generated once and all subsequent data is appended to it as it arrives.
When backing up Valkey, ClusterControl backs up both RDB and AOF (if enabled) files on the selected database node.
Backup Encryption and Decryption
If the encryption option is enabled for a particular backup, ClusterControl will use OpenSSL to encrypt the backup using the AES-256 CBC algorithm. Encryption happens on the backup node. If you choose to store the backup on the controller node, the backup files are streamed over in encrypted format through socat or netcat.
If compression is enabled, the backup is first compressed and then encrypted, resulting in smaller backup sizes. The encryption key is generated automatically (if it does not already exist) and stored inside the CMON configuration for the particular cluster under the backup_encryption_key option. The key is stored base64-encoded and must be decoded before it can be used to decrypt the backup. The following command shows how to decode the key:
```shell
$ cat /etc/cmon.d/cmon_X.cnf | grep ^backup_encryption_key | cut -d"'" -f2 | base64 -d > keyfile.key
```
Where X is the cluster ID. The command reads the backup_encryption_key value and decodes it to binary output, which is why it is important to redirect the output to a file; in the example, we redirected it to keyfile.key. The key file holding the actual encryption key can then be used in the OpenSSL command to decrypt the backup, for example:
```shell
$ cat {BACKUPFILE}.aes256 | openssl enc -d -aes-256-cbc -pass file:/path/to/keyfile.key > backup_file.sql.gz
```
Alternatively, you can pipe the decrypted output straight into the respective restore command chain, for example:
```shell
$ cat {BACKUPFILE}.aes256 | openssl enc -d -aes-256-cbc -pass file:/path/to/keyfile.key | gunzip | psql -p5432 -f-
```
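The compress-then-encrypt chain can be exercised end to end on dummy data. The sketch below generates its own stand-in key and sample "backup", so it round-trips with any OpenSSL version; note that when decrypting a real CMON-encrypted backup with a newer OpenSSL, you may additionally need to match the key-derivation digest (e.g. `-md md5`), which is an assumption to verify against your ClusterControl version.

```shell
# Self-contained round trip on dummy data; nothing here touches a real backup.
workdir=$(mktemp -d)
openssl rand -hex 32 > "$workdir/keyfile.key"   # stand-in for the decoded CMON key

# "Backup": compress first, then encrypt (the same order ClusterControl uses).
printf 'SELECT 1;\n' | gzip \
    | openssl enc -aes-256-cbc -pass "file:$workdir/keyfile.key" \
    > "$workdir/backup.sql.gz.aes256"

# Restore: decrypt, then decompress.
openssl enc -d -aes-256-cbc -pass "file:$workdir/keyfile.key" \
    < "$workdir/backup.sql.gz.aes256" | gunzip > "$workdir/restored.sql"

cat "$workdir/restored.sql"   # SELECT 1;
```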
Backup Settings
Manages the backup default settings for the corresponding cluster.
Feature | Description |
---|---|
Default backup directory | |
Default subdirectory | |
Netcat port | |
Enable hash check on created backup files | |
Default backup retention period | |
Default cloud backup retention period | |
Backup Subdirectory
Variable | Description |
---|---|
B | The date and time when the backup creation began. |
H | The name of the backup host, the host that created the backup. |
i | The numerical ID of the cluster. |
I | The numerical ID of the backup. |
J | The numerical ID of the job that created the backup. |
M | The backup method (e.g. “mysqldump”, “pg_basebackup”, “mongodump”). |
O | The name of the user who initiated the backup job. |
S | The name of the storage host, the host that stores the backup files. |
% | A literal percent sign. Write two percent signs (%%) to produce a single percent sign, the same way the standard printf() function interprets it. |