
Clusters

Provides information and management options for database clusters managed by ClusterControl. Clicking Clusters in the sidebar menu lists all database clusters on the main panel. Each entry has a real-time summary of the cluster, together with a dropdown management menu (see Cluster Actions).

Each entry will have the following details:

  • Cluster name: Configurable during import or deployment, and also via the s9s command line.
  • Cluster ID: Immutable ID assigned by ClusterControl after it is registered.
  • Cluster type: The database cluster type, recognized by ClusterControl. See Supported Databases.
  • Nodes: All nodes grouped under this particular cluster. See Nodes.
  • Auto recovery: The ClusterControl automatic recovery settings. See Automatic Recovery.
  • Load: Cluster load average over the last 5 minutes.
  • Actions: Every supported database cluster has its own set of dropdown menus. See Cluster Actions.
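
The same cluster summary can also be retrieved from the s9s command line, if the CLI is installed and configured. A minimal sketch (the output shown is illustrative):

s9s cluster --list --long
# Prints every cluster with its ID, state, type, owner, and name, e.g.:
# ID STATE   TYPE   OWNER GROUP  NAME        COMMENT
#  1 STARTED galera admin admins galera-prod All nodes are operational.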

Choose a database cluster from the expandable list on the left-side menu to drill down into the cluster-specific features. When a cluster is selected, the navigation breadcrumbs (top panel) will reflect the corresponding cluster.

Cluster Actions

Provides shortcuts to the main cluster functionality. Each database cluster type has its own set of actions, as described below:

  1. MySQL/MariaDB (replication/standalone)
  2. Galera Cluster
  3. PostgreSQL/TimescaleDB (streaming replication/standalone)
  4. MongoDB
  5. Redis
  6. Microsoft SQL Server
  7. Elasticsearch

MySQL/MariaDB (replication/standalone)

MySQL / Percona Server / MariaDB Server
Feature / Description
Add new → Replication node
  • Adds a new or existing replica database node into the cluster. The new replica will automatically join and synchronize with the rest of the cluster. See Add Replication Node.
Add new → Load balancer
  • Deploys a new load balancer or imports an existing one for this cluster. See Add Load Balancer.
Schedule maintenance
  • Schedules a cluster-wide maintenance mode, where ClusterControl will skip raising alarms and notifications while the mode is active.
  • All nodes in the cluster (regardless of the role) will be marked as under maintenance. A global banner will appear if there is upcoming maintenance for the corresponding cluster.
Encryption settings → Change SSL Certificate
  • Opens the SSL certificate management wizard which allows you to create or choose an existing certificate for database client-server encryption.
Encryption settings → SSL encryption
  • Toggles On to enable client-server SSL encryption. This action will restart the whole cluster, one node at a time.
  • Toggles Off to disable client-server SSL encryption. This action will restart the whole cluster, one node at a time.
Cluster/node recovery → Cluster recovery
  • Disables or enables cluster recovery.
  • If enabled, ClusterControl will attempt to recover the cluster if necessary. See Cluster Recovery.
Cluster/node recovery → Node recovery
  • Disables or enables node recovery.
  • If enabled, ClusterControl will attempt to recover the node if necessary. See Node Recovery.
Change RPC API token
  • The token serves as the authentication string used by the ClusterControl UI to connect to the CMON RPC interface. Each cluster has its own unique token.
Audit logging
  • Enables audit logging (MariaDB-based clusters only) using policy-based monitoring and logging of connection and query activity. This feature enables audit logging on all nodes in the cluster. See Audit Logging.
Readonly
  • Toggles On to enable cluster-wide read-only. ClusterControl will configure read_only=ON and super_read_only=ON (except MariaDB) on all database nodes in the cluster.
  • Toggles Off to disable cluster-wide read-only.
Upgrades
  • Performs minor software upgrades for the database and load balancer software of this cluster, for example from 8.0.x to 8.0.y, in a rolling upgrade fashion. ClusterControl will perform the software upgrade based on what is available on the package repository for the particular vendor. See Upgrades.
Edit details
  • Edit the cluster’s name and tags.
Remove cluster
  • Removing the cluster will delete all metadata we have on it. The database nodes will not be affected and will continue to operate as normal.
  • One has to explicitly type “REMOVE” in the text field before the cluster removal job is triggered. The cluster can be imported again in the future with fresh new metadata and configuration.

Galera Cluster

MySQL / Percona XtraDB Cluster / MariaDB (Galera Cluster)
Feature / Description
Add new → Node
  • Adds a new or existing database node into the cluster. You can scale out your cluster by adding more database nodes. The new node will automatically join and synchronize with the rest of the cluster. See Add Node.
Add new → Replication node
  • Adds a new or existing replica database node into the cluster. The new replica will automatically join and synchronize with the rest of the cluster. See Add Replication Node.
Add new → Load balancer
  • Deploys a new load balancer or imports an existing one for this cluster. See Add Load Balancer.
Schedule maintenance
  • Schedules a cluster-wide maintenance mode, where ClusterControl will skip raising alarms and notifications while the mode is active.
  • All nodes in the cluster (regardless of the role) will be marked as under maintenance. A global banner will appear if there is upcoming maintenance for the corresponding cluster.
Encryption settings → Change SSL Certificate
  • Opens the SSL certificate management wizard which allows you to create or choose an existing certificate for database client-server encryption.
Encryption settings → Change Galera SSL Certificate
  • Opens the SSL certificate management wizard which allows you to create or choose an existing certificate for Galera cluster nodes communication encryption.
Encryption settings → Galera SSL encryption
  • Toggles On to enable Galera SSL encryption. This action will restart the whole cluster, one node at a time.
  • Toggles Off to disable Galera SSL encryption. This action will restart the whole cluster, one node at a time.
Cluster/node recovery → Cluster recovery
  • Disables or enables cluster recovery.
  • If enabled, ClusterControl will attempt to recover the cluster if necessary. See Cluster Recovery.
Cluster/node recovery → Node recovery
  • Disables or enables node recovery.
  • If enabled, ClusterControl will attempt to recover the node if necessary. See Node Recovery.
Change RPC API token
  • The token serves as the authentication string used by the ClusterControl UI to connect to the CMON RPC interface. Each cluster has its own unique token.
Readonly
  • Toggles On to enable cluster-wide read-only. ClusterControl will configure read_only=ON and super_read_only=ON (except MariaDB) on all database nodes in the cluster. This is very useful in a cluster-to-cluster replication setup. See Cluster-Cluster Replication.
  • Toggles Off to disable cluster-wide read-only.
Clone/Replicate the cluster → Clone cluster
  • Copies a database cluster to multiple environments. See Clone Cluster.
Clone/Replicate the cluster → Create replica cluster
  • Creates a new cluster that replicates from this cluster. See Cluster-Cluster Replication.
Find most advanced node
  • Finds the most advanced node in the cluster. This is useful for determining which node to bootstrap when the cluster has no primary component or when ClusterControl automatic recovery is disabled (see the sketch after this table).
Upgrades
  • Performs minor software upgrades for the database and load balancer software of this cluster, for example from 8.0.x to 8.0.y, in a rolling upgrade fashion. ClusterControl will perform the software upgrade based on what is available on the package repository for the particular vendor. See Upgrades.
Edit details
  • Edit the cluster’s name and tags.
Rolling restart
  • Performs a cluster restart operation one node at a time. The rolling restart will be aborted if a node fails to be restarted.
  • For the Galera Cluster, toggle On Perform SST to force a full snapshot state transfer. The data directory will be removed from the cluster and the node will be completely resynced from another node. Note that this usually takes a longer time to complete depending on the dataset size.
  • It is also possible to reboot the hosts, one at a time, by toggling the Reboot hosts button.
Bootstrap cluster
  • Launches the bootstrap cluster window. ClusterControl will stop all running nodes before bootstrapping the cluster from the selected Galera node. See Bootstrap Cluster.
Stop Cluster
  • Stops all nodes in the cluster. This is the recommended way to perform a graceful shutdown of the cluster.
Remove cluster
  • Removing the cluster will delete all metadata we have on it. The database nodes will not be affected and will continue to operate as normal.
  • One has to explicitly type “REMOVE” in the text field before the cluster removal job is triggered. The cluster can be imported again in the future with fresh new metadata and configuration.
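
For the Find most advanced node action above, the manual equivalent is to compare the Galera sequence numbers on each node. A hedged sketch, assuming the default data directory /var/lib/mysql:

cat /var/lib/mysql/grastate.dat
# "seqno" holds the last committed transaction on a cleanly stopped node;
# the node with the highest seqno is the most advanced. A seqno of -1
# means the shutdown was not clean; recover the position with:
mysqld --wsrep-recover --user=mysql
# The recovered position (UUID:seqno) is written to the error log.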

PostgreSQL/TimescaleDB (streaming replication/standalone)

PostgreSQL / TimescaleDB
Feature / Description
Add new → Replication node
  • Adds a new or existing replica database node into the cluster. The new replica will automatically join and synchronize with the rest of the cluster. See Add Replication Node.
Add new → Load balancer
  • Deploys a new load balancer or imports an existing one for this cluster. See Add Load Balancer.
Schedule maintenance
  • Schedules a cluster-wide maintenance mode, where ClusterControl will skip raising alarms and notifications while the mode is active.
  • All nodes in the cluster (regardless of the role) will be marked as under maintenance. A global banner will appear if there is upcoming maintenance for the corresponding cluster.
Encryption settings → Change SSL Certificate
  • Opens the SSL certificate management wizard which allows you to create or choose an existing certificate for database client-server encryption.
Encryption settings → SSL encryption
  • Toggles On to enable client-server SSL encryption. This action will restart the whole cluster, one node at a time.
  • Toggles Off to disable client-server SSL encryption. This action will restart the whole cluster, one node at a time.
Cluster/node recovery → Cluster recovery
  • Disables or enables cluster recovery.
  • If enabled, ClusterControl will attempt to recover the cluster if necessary. See Cluster Recovery.
Cluster/node recovery → Node recovery
  • Disables or enables node recovery.
  • If enabled, ClusterControl will attempt to recover the node if necessary. See Node Recovery.
Change RPC API token
  • The token serves as the authentication string used by the ClusterControl UI to connect to the CMON RPC interface. Each cluster has its own unique token.
Audit logging
  • Enables audit logging using policy-based monitoring and logging of connection and query activity. This feature enables audit logging on all nodes in the cluster. See Audit Logging.
Create replica cluster
  • Creates a new cluster that replicates from this cluster. See Cluster-Cluster Replication.
Upgrades
  • Performs major or minor software upgrades for the database and load balancer software of this cluster, in a rolling upgrade fashion. ClusterControl will perform the software upgrade based on what is available on the package repository for the particular vendor. See Upgrades.
Enable TimescaleDB
  • Installs TimescaleDB database extension on every PostgreSQL node. The installation will be performed on one database node at a time, followed by a database restart to apply the changes.
  • Once PostgreSQL is converted to TimescaleDB, this action cannot be reversed. ClusterControl will treat the cluster as a TimescaleDB cluster from then on.
Edit details
  • Edit the cluster’s name and tags.
Remove cluster
  • Removing the cluster will delete all metadata we have on it. The database nodes will not be affected and will continue to operate as normal.
  • One has to explicitly type “REMOVE” in the text field before the cluster removal job is triggered. The cluster can be imported again in the future with fresh new metadata and configuration.

MongoDB

MongoDB Replica Set / MongoDB Sharded Cluster
Feature / Description
Add node
  • Scales the current MongoDB Replica Set or Sharded Cluster deployment by adding a single shard, Mongos, or config server.
Schedule maintenance
  • Schedules a cluster-wide maintenance mode, where ClusterControl will skip raising alarms and notifications while the mode is active.
  • All nodes in the cluster (regardless of the role) will be marked as under maintenance. A global banner will appear if there is upcoming maintenance for the corresponding cluster.
Convert to shard
  • Only for MongoDB Replica Set. Converts an existing MongoDB replica set to a sharded cluster by adding Mongos and config servers into the setup.
SSL encryption
  • Toggles On to enable client-server SSL encryption. This action will restart the whole cluster, one node at a time.
  • Toggles Off to disable client-server SSL encryption. This action will restart the whole cluster, one node at a time.
Cluster/node recovery → Cluster recovery
  • Disables or enables cluster recovery.
  • If enabled, ClusterControl will attempt to recover the cluster if necessary. See Cluster Recovery.
Cluster/node recovery → Node recovery
  • Disables or enables node recovery.
  • If enabled, ClusterControl will attempt to recover the node if necessary. See Node Recovery.
Change RPC API token
  • The token serves as the authentication string used by the ClusterControl UI to connect to the CMON RPC interface. Each cluster has its own unique token.
Upgrades
  • Performs minor software upgrades for the database and load balancer software of this cluster, in a rolling upgrade fashion. ClusterControl will perform the software upgrade based on what is available on the package repository for the particular vendor. See Upgrades.
Edit details
  • Edit the cluster’s name and tags.
Remove cluster
  • Removing the cluster will delete all metadata we have on it. The database nodes will not be affected and will continue to operate as normal.
  • One has to explicitly type “REMOVE” in the text field before the cluster removal job is triggered. The cluster can be imported again in the future with fresh new metadata and configuration.

Redis

Redis Standalone / Redis Sentinel
Feature / Description
Add node
  • Adds a new or existing node into the cluster. You can scale out your cluster by adding more database nodes. The new node will automatically join and synchronize with the rest of the cluster. See Add Node.
Schedule maintenance
  • Schedules a cluster-wide maintenance mode, where ClusterControl will skip raising alarms and notifications while the mode is active.
  • All nodes in the cluster (regardless of the role) will be marked as under maintenance. A global banner will appear if there is upcoming maintenance for the corresponding cluster.
Cluster/node recovery → Cluster recovery
  • Disables or enables cluster recovery.
  • If enabled, ClusterControl will attempt to recover the cluster if necessary. See Cluster Recovery.
Cluster/node recovery → Node recovery
  • Disables or enables node recovery.
  • If enabled, ClusterControl will attempt to recover the node if necessary. See Node Recovery.
Change RPC API token
  • The token serves as the authentication string used by the ClusterControl UI to connect to the CMON RPC interface. Each cluster has its own unique token.
Upgrades
  • This feature is not available yet for Redis-based clusters.
Edit details
  • Edit the cluster’s name and tags.
Remove cluster
  • Removing the cluster will delete all metadata we have on it. The database nodes will not be affected and will continue to operate as normal.
  • One has to explicitly type “REMOVE” in the text field before the cluster removal job is triggered. The cluster can be imported again in the future with fresh new metadata and configuration.

Microsoft SQL Server

Microsoft SQL Server
Feature / Description
Add node
  • Adds a new or existing node into the cluster. You can scale out your cluster by adding more database nodes. The new node will automatically join and synchronize with the rest of the cluster. See Add Node.
Schedule maintenance
  • Schedules a cluster-wide maintenance mode, where ClusterControl will skip raising alarms and notifications while the mode is active.
  • All nodes in the cluster (regardless of the role) will be marked as under maintenance. A global banner will appear if there is upcoming maintenance for the corresponding cluster.
Cluster/node recovery → Cluster recovery
  • Disables or enables cluster recovery.
  • If enabled, ClusterControl will attempt to recover the cluster if necessary. See Cluster Recovery.
Cluster/node recovery → Node recovery
  • Disables or enables node recovery.
  • If enabled, ClusterControl will attempt to recover the node if necessary. See Node Recovery.
Change RPC API token
  • The token serves as the authentication string used by the ClusterControl UI to connect to the CMON RPC interface. Each cluster has its own unique token.
Upgrades
  • This feature is not available yet for MSSQL-based clusters.
Edit details
  • Edit the cluster’s name and tags.
Remove cluster
  • Removing the cluster will delete all metadata we have on it. The database nodes will not be affected and will continue to operate as normal.
  • One has to explicitly type “REMOVE” in the text field before the cluster removal job is triggered. The cluster can be imported again in the future with fresh new metadata and configuration.

Elasticsearch

Feature / Description
Add Node
  • Adds a new Elasticsearch master, data, coordinator, or master-data node into the cluster. The new master or data node will automatically join and synchronize with the rest of the cluster. See Add Node.
Schedule maintenance
  • Schedules a cluster-wide maintenance mode, where ClusterControl will skip raising alarms and notifications while the mode is active.
  • All nodes in the cluster (regardless of the role) will be marked as under maintenance. A global banner will appear if there is upcoming maintenance for the corresponding cluster.
Cluster/node recovery → Cluster recovery
  • Disables or enables cluster recovery.
  • If enabled, ClusterControl will attempt to recover the cluster if necessary. See Cluster Recovery.
Cluster/node recovery → Node recovery
  • Disables or enables node recovery.
  • If enabled, ClusterControl will attempt to recover the node if necessary. See Node Recovery.
Change RPC API token
  • The token serves as the authentication string used by the ClusterControl UI to connect to the CMON RPC interface. Each cluster has its own unique token.
Upgrades
  • This feature is not available yet for Elasticsearch-based clusters.
Edit details
  • Edit the cluster’s name and tags.
Remove cluster
  • Removing the cluster will delete all metadata we have on it. The database nodes will not be affected and will continue to operate as normal.
  • One has to explicitly type “REMOVE” in the text field before the cluster removal job is triggered. The cluster can be imported again in the future with fresh new metadata and configuration.


Automatic Recovery

ClusterControl is programmed with a number of recovery algorithms to automatically respond to different types of common failures affecting your database systems. It understands different types of database topologies and database-related process management to help you determine the best way to recover the cluster. Some topology managers only cover cluster recoveries but you have to handle the node recovery by yourself. ClusterControl supports recovery at both cluster and node levels.

There are two recovery components supported by ClusterControl:

  1. Cluster – Attempts to recover a cluster to an operational state. Cluster recovery covers recovery attempts to bring up the entire cluster topology. See Cluster Recovery.
  2. Node – Attempts to recover a node to an operational state. Node recovery covers issues such as a node being stopped outside of ClusterControl's knowledge, e.g., via a stop command from the SSH console, or a process being killed by the OOM killer. See Node Recovery.

These two components are key to keeping service availability as high as possible.

Node Recovery

ClusterControl can recover a database node from intermittent failure by monitoring the process and connectivity to the database nodes. For process management, it works similarly to systemd: it makes sure the MySQL service is started and running unless you intentionally stopped it via the ClusterControl UI.

If the node comes back online, ClusterControl will establish a connection back to the database node and will perform the necessary actions. The following is what ClusterControl would do to recover a node:

  1. It will wait for systemd/chkconfig/init to start up the monitored services/processes for 30 seconds.
  2. If the monitored services/processes are still down, ClusterControl will try to start the database service automatically.
  3. If ClusterControl is unable to recover the monitored services/processes, an alarm will be raised.
Note

If a database shutdown is initiated by the user via ClusterControl, ClusterControl will not attempt to recover the particular node at a later stage. It expects the user to start it again via the ClusterControl UI's Start Node action, or explicitly by using an OS command.

The recovery includes all database-related services like ProxySQL, HAProxy, MaxScale, Keepalived, PgBouncer, Prometheus exporters, and garbd. Special attention goes to Prometheus exporters, where ClusterControl uses a program called daemon to daemonize the exporter process. ClusterControl will try to connect to the exporter's listening port for health check and verification. Thus, it is recommended to open the exporter ports from the ClusterControl and Prometheus servers to avoid false alarms during recovery.
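
To verify that an exporter is reachable in the same way ClusterControl does, you can probe its listening port. A minimal sketch, assuming the default mysqld exporter port 9104 on a host named db1:

curl -s http://db1:9104/metrics | head -n 3
# A healthy exporter returns Prometheus metrics in plain text; a connection
# failure here is what leads to exporter recovery attempts and alarms.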

Cluster Recovery

ClusterControl understands the database topology and follows best practices in performing the recovery. For a database cluster that comes with built-in fault tolerance like Galera Cluster, NDB Cluster, and MongoDB Replica Set, the failover process will be performed automatically by the database server via quorum calculation, heartbeat, and role switching (if any). ClusterControl monitors the process, makes the necessary adjustments to the visualization, like reflecting the changes under the Topology view, and adjusts the monitoring and management components for the new role, e.g., a new primary node in a replica set.

For database technologies that do not have built-in fault tolerance with automatic recovery, like MySQL/MariaDB Replication and PostgreSQL/TimescaleDB Streaming Replication (see further down), ClusterControl will perform the recovery procedures following the best practices provided by the database vendor. If the recovery fails, user intervention is required, and you will receive an alarm notification about it.

In a mixed/hybrid topology, for example, an asynchronous replica that is attached to a Galera Cluster or NDB Cluster, the node will be recovered by ClusterControl if cluster recovery is enabled.

Cluster recovery does not apply to standalone MySQL servers. However, it is recommended to turn on both node and cluster recovery for this cluster type in the ClusterControl UI.


MySQL/MariaDB Replication

ClusterControl supports recovery of the following MySQL/MariaDB replication setups:

  • Primary-replica with MySQL GTID
  • Primary-replica with MariaDB GTID
  • Primary-primary with MySQL GTID
  • Primary-primary with MariaDB GTID
  • Asynchronous replica attached to a Galera Cluster

ClusterControl will respect the following parameters when performing cluster recovery:

  • enable_cluster_autorecovery
  • auto_manage_readonly
  • repl_password
  • repl_user
  • replication_auto_rebuild_slave
  • replication_check_binlog_filtration_bf_failover
  • replication_check_external_bf_failover
  • replication_failed_reslave_failover_script
  • replication_failover_blacklist
  • replication_failover_events
  • replication_failover_wait_to_apply_timeout
  • replication_failover_whitelist
  • replication_onfail_failover_script
  • replication_post_failover_script
  • replication_post_switchover_script
  • replication_post_unsuccessful_failover_script
  • replication_pre_failover_script
  • replication_pre_switchover_script
  • replication_skip_apply_missing_txs
  • replication_stop_on_error

For more details on each of the parameters, refer to the documentation page.
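
These parameters live in the CMON configuration file of the corresponding cluster. A hedged sketch of how a few of them might look, assuming cluster ID 1 and the default configuration path (all values are illustrative):

# /etc/cmon.d/cmon_1.cnf (excerpt)
enable_cluster_autorecovery=1
auto_manage_readonly=1
replication_stop_on_error=1
replication_auto_rebuild_slave=0
replication_failover_whitelist=10.0.0.11,10.0.0.12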

ClusterControl will obey the following rules when monitoring and managing a primary-replica replication:

  1. All nodes will be started with read_only=ON and super_read_only=ON (regardless of their role).
  2. Only one primary (read_only=OFF) is allowed to operate at any given time.
  3. ClusterControl relies on the MySQL variable report_host to map the topology (see the configuration sketch after this list).
  4. If two or more nodes have read_only=OFF at the same time, ClusterControl will automatically set read_only=ON on all of them to protect against accidental writes. User intervention is required to pick the actual primary by disabling read-only.
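
For the topology-mapping rule above, each node must announce its own address via report_host. A minimal sketch of the relevant my.cnf settings (the hostname is illustrative):

# /etc/my.cnf (excerpt)
[mysqld]
report_host = db1.example.com   # how this node is identified in the topology
read_only = ON
super_read_only = ON            # MySQL only; not applicable to MariaDB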

In case the active primary goes down, ClusterControl will attempt to perform the primary failover in the following order:

  1. After 3 seconds of primary unreachability, ClusterControl will raise an alarm.
  2. Check replica availability; at least one of the replicas must be reachable by ClusterControl.
  3. Pick one of the replicas as a candidate to become the primary.
  4. ClusterControl will calculate the probability of errant transactions if GTID is enabled.
  5. If no errant transaction is detected, the chosen candidate will be promoted as the new primary.
  6. Create and grant the replication user to be used by replicas.
  7. Change the primary for all replicas that were pointing to the old primary to the newly promoted primary.
  8. Start the replicas and enable read-only.
  9. Flush logs on all nodes.
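
The errant-transaction check in step 4 can be approximated manually with MySQL's GTID functions. A sketch, comparing a candidate replica against the primary (host names and GTID sets are illustrative):

mysql -h replica1 -e "SELECT @@GLOBAL.gtid_executed;"
mysql -h primary1 -e "SELECT @@GLOBAL.gtid_executed;"
# Any GTIDs present on the replica but missing on the primary are errant:
mysql -h replica1 -e "SELECT GTID_SUBTRACT('<replica_gtid_set>', '<primary_gtid_set>');"
# An empty result means no errant transactions were found.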

If the replica promotion fails, ClusterControl will abort the recovery job. User intervention or a cmon service restart is required to trigger the recovery job again.

When the old primary is available again, it will be started as read-only and will not be part of the replication. User intervention is required.


PostgreSQL/TimescaleDB Streaming Replication

ClusterControl supports recovery of the following PostgreSQL replication setup:

  • PostgreSQL Streaming Replication
  • TimescaleDB Streaming Replication

ClusterControl will respect the following parameters when performing cluster recovery:

  • enable_cluster_autorecovery
  • repl_password
  • repl_user
  • replication_auto_rebuild_slave
  • replication_failover_whitelist
  • replication_failover_blacklist

For more details on each of the parameters, refer to the documentation page.

ClusterControl will obey the following rules for managing and monitoring a PostgreSQL streaming replication setup:

  • wal_level is set to replica (or hot_standby, depending on the PostgreSQL version).
  • The parameter archive_mode is set to ON on the primary.
  • A recovery.conf file is set up on the replica nodes, which turns each node into a hot standby with read-only enabled (see the sketch after this list).
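
The standby configuration referenced above looks roughly like the following on PostgreSQL 11 and earlier; PostgreSQL 12 and later replace recovery.conf with a standby.signal file plus the same parameters in postgresql.auto.conf (connection details are illustrative):

# recovery.conf (excerpt)
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.11 port=5432 user=repl application_name=replica1'
recovery_target_timeline = 'latest'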

In case the active primary goes down, ClusterControl will attempt to perform the cluster recovery in the following order:

  1. After 10 seconds of primary unreachability, ClusterControl will raise an alarm.
  2. After a 10-second grace period, ClusterControl will initiate the primary failover job.
  3. Sample the replayLocation and receiveLocation on all available nodes to determine the most advanced node.
  4. Promote the most advanced node as the new primary.
  5. Stop the replicas.
  6. Verify the synchronization state with pg_rewind.
  7. Restart the replicas with the new primary.
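
The sampling in step 3 can be reproduced manually with PostgreSQL's WAL location functions (PostgreSQL 10+ names shown; older versions use the pg_last_xlog_* equivalents):

psql -h replica1 -c "SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();"
# Run this on every available replica; the node with the highest LSN is the
# most advanced and therefore the preferred promotion candidate.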

If the replica promotion fails, ClusterControl will abort the recovery job. User intervention or a cmon service restart is required to trigger the recovery job again.

Attention

When the old primary is available again, it will be forced to shut down and will not be part of the replication. User intervention is required. See further down.

When the old primary comes back online, if the PostgreSQL service is running, ClusterControl will force a shutdown of the PostgreSQL service. This is to protect the server from accidental writes, since it would be started without a recovery file (recovery.conf), which means it would be writable. You should expect lines like the following to appear in postgresql-{day}.log:

2019-11-27 05:06:10.091 UTC [2392] LOG: database system is ready to accept connections
2019-11-27 05:06:27.696 UTC [2392] LOG: received fast shutdown request
2019-11-27 05:06:27.700 UTC [2392] LOG: aborting any active transactions
2019-11-27 05:06:27.703 UTC [2766] FATAL: terminating connection due to administrator command
2019-11-27 05:06:27.704 UTC [2758] FATAL: terminating connection due to administrator command
2019-11-27 05:06:27.709 UTC [2392] LOG: background worker "logical replication launcher" (PID 2419) exited with exit code 1
2019-11-27 05:06:27.709 UTC [2414] LOG: shutting down
2019-11-27 05:06:27.735 UTC [2392] LOG: database system is shut down

PostgreSQL was started when the server came back online at around 05:06:10, but ClusterControl performed a fast shutdown about 17 seconds later, at around 05:06:27. If this is not the behavior you want, you can temporarily disable node recovery for this cluster.


Add Node

Adds a new node by creating or importing it into the running database cluster. At the moment, this feature is available for three cluster types:

  • Galera Cluster – You may add a new node, or import an existing database node into the cluster.
  • Redis – Only adding a new Redis replica is supported.
  • Elasticsearch – Only adding a new Elasticsearch master, data, master-data or coordinator node is supported.

Galera Cluster

Percona XtraDB Cluster / MariaDB (Galera Cluster)

Adds a new or existing database node into the cluster. You can scale out your cluster by adding more database nodes. The new node will automatically join and synchronize with the rest of the cluster.

Create a database node

If you specify a new hostname or IP address, make sure that the node is accessible from the ClusterControl node via passwordless SSH. See Passwordless SSH. For the database version of the new node, ClusterControl will attempt to match the exact database version of the cluster.

Field / Description
Node Configuration
Data directory
  • MySQL data directory that is going to be set up on the target node.
Galera segment
  • Exclusive to Galera Cluster. Specify an integer other than the current segment to place the target node in a different Galera segment.
Configuration template
  • Choose a MySQL configuration template for the new node. The configuration templates will be loaded from /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
Install software
  • On – Installs MySQL Server packages. The packages will follow the primary node’s repository and vendor.
  • Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
Disable firewall
  • On – Disables firewall (recommended).
  • Off – This configuration task will be skipped. If the firewall is enabled, make sure you have configured the necessary ports.
Disable SELinux/AppArmor
  • On – Disables SELinux or AppArmor (recommended).
  • Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
Advanced Settings
Rebuild from a backup
  • On – Select an existing backup to initially stage the replica from instead of rebuilding directly from a primary node.
  • Off – ClusterControl will stream a backup from the chosen primary for data staging on the replica host.
Include in LoadBalancer set (if exists)
  • On – ClusterControl will attempt to add this node to the current load balancer managed by ClusterControl.
  • Off – This configuration task will be skipped.
Add Node
Node
  • IP address or hostname of the target node. Press Enter to add the node; ClusterControl will perform a pre-deployment check to verify that the node is reachable via passwordless SSH.
  • If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.

Import a database node

Imports an existing replication node into ClusterControl. Use this feature if you have added a replica manually to your cluster and want it to be detected and managed by ClusterControl. ClusterControl will then detect the new database node as part of the cluster and start to manage and monitor it as with the rest of the cluster nodes. This is useful if a replica node has been configured outside of ClusterControl, e.g., through Puppet, Ansible, or manually.

Field / Description
Node Configuration
Port
  • MySQL port. The default is 3306. This port must be reachable by ClusterControl.
Include in LoadBalancer set (if exists)
  • On – ClusterControl will attempt to add this node to the current load balancer managed by ClusterControl.
  • Off – This configuration task will be skipped.
Add Node
Node
  • IP address or hostname of the target node. Press Enter to add the node; ClusterControl will perform a pre-deployment check to verify that the node is reachable via passwordless SSH.
  • If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.

Redis

Adds a new Redis replica node to join the cluster. If you specify a new hostname or IP address, make sure that the node is accessible from the ClusterControl node via passwordless SSH. See Passwordless SSH.

Field / Description
Node configuration
Port
  • Redis port. The default is 6379. This port must be reachable by ClusterControl.
Redis sentinel port
  • Redis Sentinel port. The default is 26379. This port must be reachable by ClusterControl.
Install software
  • On – Installs Redis packages. The packages will follow the primary node’s repository and vendor.
  • Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
Disable firewall
  • On – Disables firewall (recommended).
  • Off – This configuration task will be skipped. If the firewall is enabled, make sure you have configured the necessary ports.
Disable SELinux/AppArmor
  • On – Disables SELinux or AppArmor (recommended).
  • Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
Add Node
Node
  • IP address or hostname of the target node. Press Enter to add the node; ClusterControl will perform a pre-deployment check to verify that the node is reachable via passwordless SSH.
  • If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.

Elasticsearch

Adds a new Elasticsearch master, data, master-data or coordinator node to join the selected cluster. If you specify a new hostname or IP address, make sure that the node is accessible from the ClusterControl node via passwordless SSH. See Passwordless SSH.

Field / Description
Node configuration
Port
  • Elasticsearch HTTP port. The default is 9200. This port must be reachable by ClusterControl.
Install software
  • On – Installs Elasticsearch packages. The packages will follow the primary node’s repository and vendor.
  • Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
Disable firewall
  • On – Disables firewall (recommended).
  • Off – This configuration task will be skipped. If the firewall is enabled, make sure you have configured the necessary ports.
Disable SELinux/AppArmor
  • On – Disables SELinux or AppArmor (recommended).
  • Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
Add Node
Node
  • IP address or hostname of the target node. Press Enter to add the node; ClusterControl will perform a pre-deployment check to verify that the node is reachable via passwordless SSH.
  • If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
  • An additional option appears after the node's IP address or hostname is accepted:
    • Node role – Choose the Elasticsearch cluster role of the new database node from the dropdown. You may choose from master-data (master and data at the same time), master, data, and coordinator.


Add Replication Node

MySQL/MariaDB Replication/Galera Cluster

Adding a replication node requires at least one existing node already configured with binary logging and GTID enabled. This also applies to Galera Cluster. The following must be true for the primaries:

  • At least one primary among the Galera nodes.
  • MySQL/MariaDB GTID must be enabled.
  • log_slave_updates must be enabled.
  • Primary’s MySQL port is accessible by ClusterControl and replicas.
  • To enable binary logs for Galera Cluster, go to ClusterControl → Nodes → choose the database server → Enable Binary Logging.
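
You can verify these prerequisites on a candidate primary before adding the replica. A minimal sketch for MySQL (MariaDB exposes GTID state through gtid_current_pos and related variables instead of gtid_mode):

mysql -h primary1 -e "SHOW GLOBAL VARIABLES LIKE 'log_bin';"            # expect ON
mysql -h primary1 -e "SHOW GLOBAL VARIABLES LIKE 'gtid_mode';"          # expect ON (MySQL)
mysql -h primary1 -e "SHOW GLOBAL VARIABLES LIKE 'log_slave_updates';"  # expect ON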

For the replica, you need a separate host or VM, with or without MySQL installed. If you do not have MySQL installed and choose ClusterControl to install MySQL on the replica host, ClusterControl will perform the necessary actions to prepare the replica. This includes configuring the root password, creating the replication user, configuring MySQL, starting the service, and starting the replication link. The MySQL or MariaDB packages are based on the chosen vendor; for example, if you are running Percona XtraDB Cluster, ClusterControl will prepare the replica using Percona Server. Before the deployment, the following must be true for the replica node:

  • The replica node must be accessible using passwordless SSH from the ClusterControl server.
  • MySQL port (default 3306) and netcat port 9999 on the replica host are open for connections.
  • The replica node must use the same operating system as the primary node/cluster.

Create a Replication Node

MySQL / Percona Server / Percona XtraDB Cluster / MariaDB (Server and Galera Cluster)

Create a new asynchronous replication node. For a Galera Cluster, this is not a new Galera database node; instead, it is a read-only replica attached via MySQL/MariaDB replication. The replica can be set up by streaming a backup from the primary to the replica, or by restoring an existing full database backup.

Field / Description
Node configuration
Netcat port
  • The default is “9999,9990-9998”, which means port 9999 will be preferred if available. If it is not available, any free port from the defined range will be used. This setting is critical if you have many overlapping backup jobs for multiple nodes or clusters. Go to Backup Settings to change this value.
  • This port on the primary server must be reachable from the ClusterControl host.
Port
  • MySQL port. The default is 3306. This port must be reachable by ClusterControl.
Install software
  • On – Installs MySQL Server packages. It will be based on the primary node’s repository and vendor. For example, if you are running on Percona XtraDB Cluster, ClusterControl will set up a standalone Percona XtraDB Cluster node as the replica.
  • Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
Disable firewall
  • On – Disables firewall (recommended).
  • Off – This configuration task will be skipped. If the firewall is enabled, make sure you have configured the necessary ports.
Disable SELinux/AppArmor
  • On – Disables SELinux or AppArmor (recommended).
  • Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
Advanced settings
Rebuild from a backup
  • On – Select an existing backup to initially stage the replica from instead of rebuilding directly from a primary node.
  • Off – ClusterControl will stream a backup from the chosen primary for data staging on the replica host.
Include in LoadBalancer set (if exists)
  • On – ClusterControl will attempt to add this node to the current load balancer managed by ClusterControl.
  • Off – This configuration task will be skipped.
Delay the replica node
  • On – Sets up a delayed replica. A delayed replica is useful for disaster recovery in case an accidental DML/DDL is executed on the primary.
  • Off – Skips this configuration part.
Semi-synchronous replication
  • On – ClusterControl will configure a semi-synchronous replication.
  • Off – ClusterControl will configure an asynchronous replication.
Version
  • The database version that will be installed. ClusterControl will attempt to match the current version of the primary server/cluster. This is informational and not configurable.
Add Node
Primary node
  • Select a primary node. Only database nodes with binary logs enabled will be listed here.
Node
  • IP address or hostname of the target node. Press Enter to add the node; ClusterControl will perform a pre-deployment check to verify that the node is reachable via passwordless SSH.
  • If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.


Import a Replication Node

MySQL / Percona Server / Percona XtraDB Cluster / MariaDB (Server and Galera Cluster)

Imports an existing replication node into ClusterControl. Use this feature if you have added a replica manually to your cluster and want it to be detected and managed by ClusterControl. ClusterControl will then detect the new database node as part of the cluster and start to manage and monitor it as with the rest of the cluster nodes. This is useful if a replica node has been configured outside of ClusterControl, e.g., through Puppet, Ansible, or manually.

Field / Description
Node Configuration
Port
  • MySQL port. The default is 3306. This port must be reachable by ClusterControl.
Include in LoadBalancer set (if exists)
  • On – ClusterControl will attempt to add this node to the current load balancer managed by ClusterControl.
  • Off – This configuration task will be skipped.
Add Node
Node
  • Specify the replica IP address or hostname. This host must be accessible via passwordless SSH from the ClusterControl host.
  • The node must be up and running and allow the “cmon” user to connect from the controller, with at least SELECT, PROCESS, SUPER, REPLICATION CLIENT, SHOW DATABASES, RELOAD privileges on all databases. For complete management functionality, ALL PRIVILEGES WITH GRANT OPTION is needed.
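
The privilege requirement above can be satisfied with grants along these lines. A hedged sketch, assuming the controller's address is 10.0.0.5 and "cmon" as the database user:

mysql -e "CREATE USER 'cmon'@'10.0.0.5' IDENTIFIED BY '********';"
mysql -e "GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'10.0.0.5' WITH GRANT OPTION;"
# Or, for the minimal monitoring-only set named above:
# GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, SHOW DATABASES, RELOAD
#   ON *.* TO 'cmon'@'10.0.0.5';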

PostgreSQL/TimescaleDB

A PostgreSQL replication replica requires at least one primary node. The following must be true for the primary:

  • At least one primary under the same cluster ID.
  • Only PostgreSQL 9.6 and later are supported.
  • Primary’s PostgreSQL port is accessible by ClusterControl and replica hosts.

For the replica, you need a separate host or VM, with or without PostgreSQL installed. If PostgreSQL is not installed and you choose ClusterControl to install it on the host, ClusterControl will perform the necessary actions to prepare the replica, for example, create the replication user, configure PostgreSQL, start the server, and start the replication. Before the deployment, the following must be true:

  • The replica node must be accessible using passwordless SSH from the ClusterControl server.
  • The PostgreSQL port (default 5432) on the replica is open for connections for at least the ClusterControl server and the other members in the cluster.

To prepare the PostgreSQL configuration file for the replica, go to ClusterControl → Manage → Configurations → Template Configuration files. Later, specify this template file when adding a replica.

Create a Replication Node

PostgreSQL / TimescaleDB

The replica will be set up by streaming a pg_basebackup backup from the primary node to the replica node. The primary node’s configuration will be altered to allow the replica node to join.
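
The staging step is roughly equivalent to running pg_basebackup manually on the replica host. A sketch under assumed values for the host, user, and data directory:

pg_basebackup -h 10.0.0.11 -U repl -D /var/lib/pgsql/data -X stream -R -P
# -X stream includes the required WAL, -R writes the standby/recovery
# settings, and -P reports progress while the base backup is streamed.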

Field / Description
Node configuration
Port
  • PostgreSQL port. The default is 5432. This port must be reachable by ClusterControl.
Use package default for datadir
  • On – ClusterControl will use the default path for the PostgreSQL data directory.
  • Off – Specify a custom PostgreSQL data directory path.
Install software
  • On – Installs PostgreSQL server packages. The packages will follow the primary node’s repository and vendor.
  • Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
Advanced settings
Include in LoadBalancer set (if exists)
  • On – ClusterControl will attempt to add this node to the current load balancer managed by ClusterControl.
  • Off – This configuration task will be skipped.
Instance name
  • Specifies an instance name for this configuration. The name identifies this database cluster (instance) for various purposes. The cluster name appears in the process title for all server processes in this cluster. Moreover, it is the default application name for a standby connection.
Semi-synchronous replication
  • On – ClusterControl will configure a semi-synchronous replication.
  • Off – ClusterControl will configure an asynchronous replication.
Add Node
Primary node
  • Select a primary node from the dropdown.
Node
  • IP address or hostname of the target node. Press Enter to add the node; ClusterControl will perform a pre-deployment check to verify that the node is reachable via passwordless SSH.
  • If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.

Import a Replication Node

PostgreSQL / TimescaleDB

Imports an existing replication node into ClusterControl. Use this feature if you have added a replica manually to your cluster and want it to be detected and managed by ClusterControl. ClusterControl will then detect the new database node as part of the cluster and start to manage and monitor it as with the rest of the cluster nodes. This is useful if a replica node has been configured outside of ClusterControl, e.g., through Puppet, Ansible, or manually.

Field / Description
Node Configuration
Port
  • PostgreSQL port. The default is 5432. This port must be reachable by ClusterControl.
Logfile path
  • Specify the log file path if logging_collector is set to OFF.
Include in LoadBalancer set (if exists)
  • On – ClusterControl will attempt to add this node to the current load balancer managed by ClusterControl.
  • Off – This configuration task will be skipped.
Add Node
Node
  • IP address or hostname of the target node. Press Enter to add the node; ClusterControl will perform a pre-deployment check to verify that the node is reachable via passwordless SSH.
  • If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.

Bootstrap Cluster

Percona XtraDB Cluster / MariaDB (Galera Cluster)

Bootstrapping refers to the process of initializing a new Galera Cluster. When you are setting up a new Galera Cluster or performing a full backup restoration, you typically need to bootstrap it, which involves designating one node as the initial primary component of the cluster. This bootstrap node serves as the starting point for the cluster, and the other nodes will then join this initial node to form the cluster.
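
Outside of ClusterControl, the same bootstrap operation corresponds to starting the first node with the bootstrap helper. A sketch for systemd-based installations:

galera_new_cluster
# Starts the local node as the primary component of a new cluster; on
# Percona XtraDB Cluster the equivalent is:
# systemctl start mysql@bootstrap.service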

The following actions will be performed:

  1. The cluster will be initialized from the selected node.
  2. All nodes will be stopped unless they are already stopped.
  3. When the bootstrap command is successful, the selected node will be Synced. The rest of the nodes will be started as joiners, one node at a time.

Field / Description
Bootstrap node
  • Choose a database node from the list to be the reference node. If you are unsure, you may choose Auto Select from the dropdown where ClusterControl will determine the most suitable node to bootstrap based on the Galera node’s sequence number.
Graceful shutdown timeout (in seconds)
  • ClusterControl will wait for this timeout to finish before forcing the node to stop.
Force stop the nodes (after shutdown time)
  • Toggle On to force stop the database node if it takes longer than the shutdown time specified in Graceful shutdown timeout (in seconds), which defaults to 1800 seconds (30 minutes). This is commonly useful when forcing a database cluster to bootstrap while database locking or long-running transactions are in progress.
Clear MySQL datadir on Joining nodes
  • Toggles On to clear the MySQL data directory on the joining nodes. This is often necessary after a restoration.

Audit Logging

MariaDB Replication / MariaDB (Galera Cluster) / PostgreSQL / TimescaleDB

Only for MariaDB-based and PostgreSQL-based clusters. Enables audit logging on the cluster using policy-based monitoring and logging of connection and query activity for security and compliance purposes. It allows database administrators to track and monitor database operations, such as logins, queries, data modifications, and schema changes. The audit plugin records these activities in a structured format, making it easier to analyze and audit database usage. This feature will enable audit logging on all nodes in the cluster.

For MariaDB audit logging, the following fields are presented:

Field / Description
Log Path
  • The filename to store the audit log in the log directory /var/log/mysql/. Alternatively, it can contain a path relative to the data directory or an absolute path. The default is server_audit.log.
Rotation Size in MB
  • Log file size in MB before log rotation happens. Changing this default value will require a cluster restart.
Rotations
  • Number of log files to keep after rotation.
Events
  • Specify MariaDB audit events that you would like to capture. ClusterControl preloads the audit events as you type. Multiple values are allowed.
Exclude Users
  • Exclude the specified MariaDB user(s) from the auditing. ClusterControl preloads all database users in a dropdown. Multiple values are allowed.
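
Under the hood, these fields map to the MariaDB server_audit plugin variables. A sketch of the resulting settings (all values are illustrative):

# my.cnf (excerpt)
server_audit_logging = ON
server_audit_file_path = server_audit.log
server_audit_file_rotate_size = 1073741824   # rotation size, in bytes
server_audit_file_rotations = 9
server_audit_events = 'CONNECT,QUERY_DDL'
server_audit_excl_users = 'monitor_user'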

For PostgreSQL/TimescaleDB audit logging, the following fields are presented:

Field / Description
Events
  • Specify PostgreSQL audit events that you would like to capture. ClusterControl preloads the audit events as you type. Multiple values are allowed.
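
For PostgreSQL/TimescaleDB, audit logging is typically implemented with the pgaudit extension, and the selected events map to its log classes. A sketch (values are illustrative):

# postgresql.conf (excerpt)
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'read, write, ddl'   # statement classes to audit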

Add Load Balancer

Manages deployment of load balancer-related software (HAProxy, ProxySQL, PgBouncer, and MaxScale) and virtual IP addresses (Keepalived). For Galera Cluster, it is also possible to add the Galera arbitrator daemon (Garbd) through this interface.

ProxySQL

MySQL-based Clusters / MariaDB-based Clusters

Available for MySQL-based clusters. By default, ClusterControl deploys ProxySQL in read/write split mode – your read-only traffic will be sent to replicas while your writes will be sent to a writable primary by creating two host groups. In case of primary failure, ProxySQL will detect the new writable primary and route writes to it automatically without any user intervention. ProxySQL offers a wide range of features and capabilities that make it suitable for various scenarios, including improving the scalability, availability, performance, and security of MySQL database infrastructures.
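
The read/write split described above is implemented with ProxySQL host groups and query rules. A hedged sketch via the ProxySQL admin interface, assuming hostgroup 10 holds the writer and hostgroup 20 the readers (the IDs are illustrative):

mysql -h 127.0.0.1 -P 6032 -u admin -p
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT .* FOR UPDATE', 10, 1), (2, 1, '^SELECT', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
-- Writes and SELECT ... FOR UPDATE go to the writer; plain SELECTs go to the readers.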

Create ProxySQL Service

Deploy a new ProxySQL server. You can use an existing database server or another new host by specifying the hostname or IPv4 address. With two or more ProxySQL nodes, you can then configure a virtual IP address service using Keepalived. See Keepalived.

Field / Description
Where to install
Server Address
  • Specify the hostname or IP address of the host. This host must be accessible via passwordless SSH from the ClusterControl node.
Admin Port
  • ProxySQL administration port. The default is 6032.
Listening Port
  • ProxySQL listening port for MySQL load balancing. The default is 6033. This is where applications and clients will connect after ProxySQL is activated.
Version
  • Only ProxySQL version 2.x is supported.
ProxySQL Configuration
Disable Firewall
  • On – Disables firewall (recommended).
  • Off – This configuration task will be skipped. If the firewall is enabled, make sure you have configured the necessary ports.
Disable AppArmor/SELinux
  • On – Disables SELinux or AppArmor (recommended).
  • Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
Import Configuration
  • Deploys a new ProxySQL based on an existing ProxySQL instance. The source instance must be added first into ClusterControl. Once added, you can choose the source ProxySQL instance from a dropdown list.
Use Native Clustering
  • The ProxySQL server will be created using native ProxySQL clustering. An entry will be created in the proxysql_servers table.
  • It is recommended to enable this if you would like to have more than one ProxySQL node. Port 6032 must be reachable between all ProxySQL nodes.
Configuration
Administration User
  • ProxySQL administration user name.
Administration Password
  • Password for Administration User.
Monitor User
  • ProxySQL monitoring user name.
Monitor Password
  • Password for Monitor User.
Database user (Optional)
Existing User
  • Database username: Choose the existing user name from the dropdown.
  • Database password: Password for DB User.
Create new User
  • Database username: The database user name.
  • Database password: Password for the database user.
  • Database name: Database name in “database.table” format. To GRANT against all tables, use a wildcard, for example: mydb.*.
  • MySQL privilege(s): ClusterControl will suggest privilege names as you type. Multiple privileges are allowed.
Server instances
Include
  • Toggle to On to include the database node in the load balancer.
Max replication lag
  • How many seconds of replication lag should be allowed before marking the node as unhealthy. The default value is 10.
Max connection
  • Maximum number of connections to be sent to the backend servers. It is recommended to match, or stay lower than, the max_connections value of the backend servers.
Weight
  • This value is used to adjust the server’s weight relative to other servers. All servers will receive a load proportional to their weight relative to the sum of all weights. The higher the weight, the higher the priority.
Are you using implicit transactions?
  • On – If you rely on SET AUTOCOMMIT=0 to create a transaction.
  • Off – If you explicitly use BEGIN or START TRANSACTION to create a transaction. Choose Off if you are unsure of this part.

After the ProxySQL installation finishes, the node will be listed on the Nodes page, where you can manage and view the ProxySQL status, variables, and settings.

Import ProxySQL

If you already have ProxySQL installed in your setup, you can easily import it into ClusterControl to benefit from monitoring and management of the instance.

Field / Description
ProxySQL location
Server Address
  • Specify the hostname or IP address. You can choose from the dropdown list or type in a new host.
Admin Port
  • ProxySQL administration port. The default is 6032.
Listening Port
  • ProxySQL load-balanced port. The default is 6033.
ProxySQL Configuration
Import Configuration
  • Adds an existing ProxySQL instance and imports the configuration from another existing instance. The source instance must be added first into ClusterControl. Once added, you can choose the source ProxySQL instance from a dropdown list.

After the ProxySQL import operation finishes, the node will be listed on the Nodes page, where you can manage and view the ProxySQL connection status.

HAProxy

MySQL-based clusters / MariaDB-based clusters / PostgreSQL-based clusters / TimescaleDB-based clusters

Installs and configures an HAProxy instance. ClusterControl will automatically install and configure HAProxy, install the mysqlchk script (for MySQL health checks) on each of the database nodes as part of the xinetd service, and start the HAProxy service. If you set up read/write splitting for primary-replica replication, two listening ports will be configured (one for read-write and another for read-only connections).

This feature is idempotent; you can execute it as many times as you want, and it will always reinstall everything as configured.
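
The generated configuration pairs each backend server with the health check service on port 9200. A simplified sketch of the resulting haproxy.cfg section (names, addresses, and ports are illustrative):

listen haproxy_3307_rw
    bind *:3307
    mode tcp
    balance leastconn
    option httpchk
    server db1 10.0.0.11:3306 check port 9200
    server db2 10.0.0.12:3306 check port 9200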

Create HAProxy Service

Deploy a new HAProxy server. You can use an existing database server or another new host by specifying the hostname or IPv4 address. With two or more HAProxy nodes, you can then configure a virtual IP address service using Keepalived. See Keepalived.

Field  Description
Where to install
Server Address
  • Select the host on which to install the load balancer. If the host is not provisioned by ClusterControl, type in the IP address or hostname. The required files will be installed on the new host. Note that ClusterControl will access the new host using passwordless SSH.
Policy
  • Choose one of these load-balancing algorithms:
    • leastconn – The server with the lowest number of connections receives the connection.
    • round-robin – Each server is used in turns, according to their weights.
    • source – The same client IP address will always reach the same server as long as no server goes down.
Listen Port (Read/Write)
  • Specify the HAProxy listening port. This will be used as the load-balanced database connection port for read/write connections.
Install for read/write splitting (master-slave replication)
  • Toggle On if you want HAProxy to use another listener port for read-only connections. A new text box for Listen Port (Read Only) will appear next to the Listen Port (Read/Write) text box, where you can specify the port for read-only database connections.
Security configuration
Disable Firewall
  • On – Disables firewall (recommended).
  • Off – This configuration task will be skipped. If the firewall is enabled, make sure you have configured the necessary ports.
Disable AppArmor/SELinux
  • On – Disables SELinux or AppArmor (recommended).
  • Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
Installation settings
Overwrite existing /usr/local/sbin/mysqlchk on target
  • Toggle On if you want to overwrite any existing health check script on the target nodes.
Advanced settings
Stats Socket
  • Specify the path to bind a UNIX socket for HAProxy statistics. See stats socket.
Admin Port
  • Port for the HAProxy statistics page.
Admin User
  • Admin username to access the HAProxy statistics page. See stats auth.
Admin Password
  • Admin password to access the HAProxy statistics page.
Backend Name
  • Name for the backend. No whitespace or tabs allowed.
Timeout Server (seconds)
  • Sets the maximum inactivity time on the server side. See timeout server.
Timeout Client (seconds)
  • Sets the maximum inactivity time on the client side. See timeout client.
Max Connections Frontend
  • Sets the maximum per-process number of concurrent connections to the HAProxy instance. See maxconn.
Max Connections Backend/per instance
  • Sets the maximum per-process number of concurrent connections per backend instance. See maxconn.
xinetd allow connections from
  • The specified subnet will be allowed to access the health check scripts (mysqlchk or mysqlchk_rw_split for read/write splitting on MySQL-based clusters; postgreschk or postgreschk_rw_split on PostgreSQL-based clusters) as an xinetd service, which listens on port 9200 on every database node. To allow connections from all IP addresses, use the default value, “0.0.0.0/0”.
Server instances
Include
  • Select servers in your cluster that will be included in the load balancing set.
Advanced options → Role
  • Supported roles:
    • Active – The server is actively used in load balancing.
    • Backup – The server is only used in load balancing when all other non-backup servers are unavailable.
Advanced options → Connection Address
  • Choose the IP address where HAProxy should be listening on the host.

After the HAProxy installation finishes, the node will be listed under the Nodes page where you can manage and view the HAProxy connection status.
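
You can also exercise the xinetd health check manually from an allowed subnet. Assuming the default port 9200 and an illustrative database host, the script replies with an HTTP-style status that HAProxy’s httpchk consumes:

    # Query the health check script on a database node (host is illustrative)
    curl -i http://10.0.0.11:9200/
    # A healthy node typically answers with an HTTP 200 status;
    # an unavailable node answers 503, and HAProxy marks it down.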

Import HAProxy

Import an existing HAProxy instance. With two or more HAProxy nodes, you can then configure a virtual IP address service using Keepalived. See Keepalived.

Field  Description
Configuration
Server Address
  • Select the host where the existing load balancer is running. If the host is not provisioned by ClusterControl, type in the IP address or hostname. The required files will be installed on the host. Note that ClusterControl will access the host using passwordless SSH.
Port
  • Port of the HAProxy admin/statistics page (if enabled).
Advanced settings
Admin User
  • Admin username to access the HAProxy statistics page. See stats auth.
Admin Password
  • Admin password to access the HAProxy statistics page.
cmdline
  • Specify the command line that ClusterControl should use to start the HAProxy service. You can verify this by using ps -ef | grep haproxy to retrieve the full command line used to start the HAProxy process. Copy the full command line and paste it into the text field.
LB Name
  • Name for the backend. No whitespace or tabs allowed.
HAProxy config
  • Location of HAProxy configuration file (haproxy.cfg) on the target node.
Stats socket
  • Specify the path to bind a UNIX socket for HAProxy statistics. See stats socket.
  • Usually, HAProxy writes the socket file to /var/run/haproxy.socket. ClusterControl needs this socket to monitor HAProxy; it is usually defined in the haproxy.cfg file.

After the HAProxy import operation finishes, the node will be listed under the Nodes page where you can manage and view the HAProxy connection status.

Note

You will need an admin user/password set in the HAProxy configuration; otherwise, you will not see any HAProxy stats.
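
To confirm the stats socket is usable, which ClusterControl relies on for monitoring, you can query it manually. A quick check, assuming the common /var/run/haproxy.socket path and that socat is installed:

    # Ask HAProxy for per-frontend/backend statistics over its UNIX socket
    echo "show stat" | socat stdio /var/run/haproxy.socket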

Keepalived

Keepalived uses the IP Virtual Server (IPVS) kernel module to provide transport layer (Layer 4) load balancing, redirecting requests for network-based services to individual members of a server cluster.

Keepalived requires two or more HAProxy, ProxySQL, or MariaDB MaxScale instances to provide virtual IP address failover. By default, the virtual IP address will be assigned to the instance ‘Keepalived 1’. If that node goes down, the IP address will automatically fail over to ‘Keepalived 2’.

Note

To understand how ClusterControl configures Keepalived, see this blog post How ClusterControl Configures Virtual IP and What to Expect During Failover.

Deploy Keepalived

Deploy a new Keepalived instance. You need at least two HAProxy, ProxySQL, or MariaDB MaxScale instances before you can proceed with this deployment.

Field  Description
Load balancer type
  • Supported load balancer types to integrate with Keepalived: HAProxy, ProxySQL, and MaxScale. For ProxySQL, you can deploy more than two Keepalived instances.
Keepalived
  • Select the existing load balancer hosts from the dropdown. You need at least two load balancer hosts to proceed.
Add Keepalived Instance
  • Shows an additional input field for a secondary Keepalived node.
Remove Keepalived Instance
  • Hides the additional input field for the secondary Keepalived node.
Virtual IP
  • Assign a virtual IP address. The IP address must not already exist on any node in the cluster, to avoid conflicts.
Network Interface
  • Specify a network interface to bind the virtual IP address of the load balancer host. This interface must be able to communicate with other Keepalived instances and support IP protocol 112 (VRRP) and unicast.

After the Keepalived installation finishes, the node will be listed under the Nodes page where you can view the Keepalived status and configuration.
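
Under the hood, the deployment produces a standard VRRP configuration on each load balancer host. A minimal sketch of what a keepalived.conf instance block can look like (the interface, router ID, priority, and virtual IP are illustrative):

    vrrp_instance VI_LB {
        interface eth0               # interface carrying VRRP traffic
        state MASTER                 # initial role; peers start as BACKUP
        virtual_router_id 51         # must match on all Keepalived peers
        priority 101                 # highest priority holds the virtual IP
        virtual_ipaddress {
            192.168.10.100           # the virtual IP that fails over
        }
    }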

Import Keepalived

Import an existing Keepalived instance. You need at least two HAProxy, ProxySQL, or MariaDB MaxScale instances before you can proceed with this import.

Field  Description
Keepalived 1
  • Specify the IP address or hostname of the primary Keepalived node.
Add Keepalived Instance
  • Shows an additional input field for a secondary Keepalived node.
Remove Keepalived Instance
  • Hides the additional input field for the secondary Keepalived node.
Virtual IP
  • Assign a virtual IP address. The IP address must not already exist on any node in the cluster, to avoid conflicts.

After the Keepalived import job finishes, the node will be listed under the Nodes page where you can view the Keepalived status and configuration.

MariaDB MaxScale

MySQL-based clustersMariaDB-based clusters

MaxScale is an intelligent proxy that allows the forwarding of database statements to one or more database servers using complex rules, a semantic understanding of the database statements, and the roles of the various servers within the backend cluster of databases.

You can deploy a new MaxScale node or import an existing one as a load balancer and query router for your Galera Cluster or MySQL/MariaDB replication. For a new deployment, ClusterControl will by default create two production services:

  • RW – Implements read-write split access.
  • RR – Implements round-robin access.
Attention

ClusterControl performs MariaDB MaxScale installation via direct package download without using the MariaDB repository. The package download URL is kept inside /usr/share/cmon/templates/packages.conf on the ClusterControl server, under the [maxscale] section. Occasionally, the provided URL becomes outdated as MariaDB releases a new minor version and removes the older minor version for a specific MaxScale major version. If that is the case, manually update the download link in this file. The updated download URL is available on the MariaDB MaxScale website.

Deploy MaxScale

Deploy MariaDB MaxScale as MySQL/MariaDB load balancer. With two MaxScale nodes, you can then configure a virtual IP address service using Keepalived. See Keepalived.

Field  Description
Where to install
Server Address
  • The IP address of the node where MaxScale will be installed. ClusterControl must be able to perform passwordless SSH to this host.
Configure MaxScale
Threads
  • How many threads MaxScale is allowed to use.
RR Port (Port for round-robin listener)
  • Port for round-robin listener. The default is 4006.
RW Port (Port for read/write split listener)
  • Port for the read-write split listener. The default is 4008.
Disable Firewall
  • On – Disables firewall (recommended).
  • Off – This configuration task will be skipped. If the firewall is enabled, make sure you have configured the necessary ports.
Disable AppArmor/SELinux
  • On – Disables SELinux or AppArmor (recommended).
  • Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
Configuration
MaxScale admin username
  • MaxScale admin username. The default is ‘admin’.
MaxScale admin password
  • MaxScale enforces that the admin password for the admin user ‘admin’ is ‘mariadb’. If you want to change or use another password, you must create another user. The ‘admin’ user can later be dropped.
MaxScale MySQL Username
  • MariaDB/MySQL user that will be used by MaxScale to access and monitor the MariaDB/MySQL nodes.
MaxScale MySQL Password
  • Password of MaxScale MySQL Username.
Server instances
Include
  • Select MySQL/MariaDB servers to be included in the load balancing set.

After the MariaDB MaxScale installation finishes, the node will be listed under the Nodes page where you can manage and run the web-based maxctrl commands.
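
The same information is also available from the command line on the MaxScale host through MaxCtrl. For example (standard MaxCtrl commands; the output format varies by MaxScale version):

    # List the monitored backend servers and the configured services
    maxctrl list servers
    maxctrl list services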

Import MaxScale

If you already have MaxScale installed in your setup, you can easily import it into ClusterControl to benefit from health monitoring and access to MaxCtrl (MaxScale’s command-line client) from the same interface you use to manage the database nodes. The only requirement is to have passwordless SSH configured between the ClusterControl node and the host where MaxScale is running.

Field  Description
MaxScale Address
  • The IP address of the existing MaxScale server.

After the MariaDB MaxScale import job finishes, the node will be listed under the Nodes page where you can manage and run the web-based maxctrl commands.

Garbd

Percona XtraDB ClusterMariaDB (Galera Cluster)

Exclusive for Galera Cluster. Galera arbitrator daemon (garbd) can be installed to avoid network partitioning or split-brain scenarios.

Field  Description
Server Address
  • Manually specify the new garbd hostname or IP address or select a host from the list. That host cannot be an existing Galera node.
CmdLine
  • The command line used to start the garbd process on the target node.
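
For reference, a typical garbd command line looks like the following (the cluster addresses, group name, and log path are illustrative; the group must match the cluster name):

    garbd --address gcomm://10.0.0.11:4567,10.0.0.12:4567 \
          --group my_galera_cluster \
          --log /var/log/garbd.log \
          --daemon
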
Attention

ClusterControl does not support or allow garbd deployment on the server where ClusterControl itself is running, as the software packaging tools tend to remove the existing MySQL packages that ClusterControl depends on.

PgBouncer

PostgreSQL-based clustersTimescaleDB-based clusters

PgBouncer is a lightweight connection pooler for PostgreSQL. It reduces PostgreSQL resource consumption (memory, backends, forking) and supports online restart or upgrade without dropping client connections. Using ClusterControl, you can manage PgBouncer on one or more nodes, manage multiple pools per node, and use any of three pool modes:

  • session (default): When a client connects, a server connection will be assigned to it for the whole duration the client stays connected. When the client disconnects, the server connection will be put back into the pool.
  • transaction: A server connection is assigned to a client only during a transaction. When PgBouncer notices that the transaction is over, the server connection will be put back into the pool.
  • statement: The server connection will be put back into the pool immediately after a query completes. Multi-statement transactions are disallowed in this mode as they would break.

Deploy PgBouncer

ClusterControl only supports deploying PgBouncer on the same host as the PostgreSQL server. When deploying a PgBouncer node, ClusterControl will use the following default values:

  • Command: /usr/bin/pgbouncer /etc/pgbouncer/pgbouncer.ini
  • Port: 6432
  • Configuration file: /etc/pgbouncer/pgbouncer.ini
  • Logfile: /var/log/pgbouncer/pgbouncer.log
  • Auth file: /etc/pgbouncer/userlist.txt
  • Pool mode: session
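
For reference, a minimal pgbouncer.ini reflecting these defaults might look like the following (the database entry is illustrative; the actual file generated by ClusterControl may differ):

    [databases]
    ; route pooled connections to the local PostgreSQL instance (illustrative)
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    admin_users = pgbadmin
    pool_mode = session
    logfile = /var/log/pgbouncer/pgbouncer.log
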
Field Description
Authentication
PgBouncer Admin User
  • PgBouncer admin username. ClusterControl will create the admin user if specified.
  • If empty, ClusterControl will generate an admin user called “pgbadmin”.
PgBouncer Admin Password
  • Password for PgBouncer Admin User.
Add nodes
PgBouncer Node
  • Select an existing PostgreSQL node from the dropdown. Multiple selection is allowed.
Listen Port
  • Listening port for PgBouncer. The default is 6432.

After the PgBouncer installation finishes, the node will be listed under the Nodes page where you can manage the connection pools.

Import PgBouncer

ClusterControl only supports importing PgBouncer from the same host as the PostgreSQL server.

Field Description
Authentication
PgBouncer Admin User
  • PgBouncer admin username.
PgBouncer Admin Password
  • Password for PgBouncer admin user.
Add nodes
PgBouncer Node
  • Select an existing PostgreSQL node from the dropdown.
Listen Port
  • Listening port for PgBouncer.

After the PgBouncer import operation finishes, the node will be listed under the Nodes page where you can manage the connection pools.

Configure WAL

PostgreSQLTimescaleDB

The WALs are the REDO logs in PostgreSQL. REDO logs contain all changes that were made in the database and they are used for replication, recovery, online backup, and point-in-time recovery (PITR). Any changes that have not been applied to the data pages can be redone from the REDO logs.

This step is not mandatory but is extremely important for a robust replication setup, as it is necessary to avoid the primary server recycling old WAL files that have not yet been applied to the replica. If this occurs, the replica must be recreated from scratch. Enabling WAL archiving makes it possible to support online backup and point-in-time recovery in PostgreSQL.
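
The fields below map to the standard PostgreSQL archiving parameters. As a sketch of the resulting postgresql.conf entries (the archive path and compression command are illustrative, not the exact values ClusterControl writes):

    # postgresql.conf (illustrative)
    wal_level = replica
    archive_mode = on            # or 'always' to archive on replicas as well
    archive_command = 'gzip < %p > /var/lib/pgsql/15/wal_archive/%f.gz'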

Field Description
Archive Mode
  • Off: WAL logs are not archived, thus point-in-time recovery will not be possible.
  • On: WAL logs are archived only if the node is in primary mode.
  • Always: WAL logs are archived no matter whether the node is in primary or replica mode.
Compressed WAL Archive
  • Whether to compress archived WAL logs. Enabled by default.
Custom WAL Archive Directory
  • Path to the WAL archive directory. If empty, the default path is /var/lib/pgsql/{major_version}/wal_archive.
Apply to
  • WAL can be enabled/archived on all database nodes, or on an individual database node. Select one from the dropdown list.
Warning

Changing the current value discards the continuous WAL archive collected so far, and thus the time frame available for point-in-time recovery (PITR).

Cluster-Cluster Replication

Percona XtraDB ClusterMariaDB (Galera Cluster)PostgreSQLTimescaleDB

This feature allows you to create a new cluster that will be replicating from this cluster. One common use case is for disaster recovery by having a hot standby site/cluster that can take over when the main site/cluster has failed. Clusters can be rebuilt with an existing backup or by streaming from a primary on the source cluster.

For MySQL-based clusters, ClusterControl will configure asynchronous MySQL replication from a primary cluster to a replica cluster. For PostgreSQL-based clusters, ClusterControl will configure asynchronous streaming replication from a primary cluster to a replica cluster. For Galera Cluster, the asynchronous replication can optionally be set up as uni-directional (primary → replica) or bi-directional (primary ↔ replica) replication.
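
Regardless of the topology, the link between the two clusters is ordinary asynchronous replication, so it can be inspected on the replica cluster with the usual statements (SHOW REPLICA STATUS requires MySQL 8.0.22+ or MariaDB 10.5+; use SHOW SLAVE STATUS on older versions):

    -- MySQL/MariaDB: check the incoming replication stream on the replica side
    SHOW REPLICA STATUS\G

    -- PostgreSQL: check the WAL receiver on the replica side
    SELECT status, sender_host, sender_port FROM pg_stat_wal_receiver;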

There are two ways ClusterControl can create a replica cluster:

  1. Streaming data from the primary cluster – Stream the data from a primary using hot backup tools, e.g., Percona XtraBackup, MariaDB Backup, or pg_basebackup. You need to pick one node of the source cluster to replicate from.
  2. Stage cluster from a backup – Choose an existing full backup from the dropdown list. For MySQL or MariaDB, if none is listed, take a full backup of one of the database nodes in your cluster that has binary logging enabled.

Once the above options have been selected, the cluster deployment wizard will appear similar to deploying a new cluster. See Deploy Database Cluster.

A replica cluster will appear in the database cluster list after the deployment finishes. You will notice that the replica cluster entry has a green footer showing that it is a replica of another cluster. For bi-directional replication, you will see a double-headed arrow (a single-headed arrow for uni-directional replication) with the name and ID of the cluster it is replicating from, indicating that cluster-cluster replication is now active. You may also check the topology on the cluster information card (hover over the cluster name to make the card appear).

Note

We highly recommend enabling cluster-wide read-only on the replica cluster. Disable read-only only when promoting the replica cluster to be the new primary cluster.

Clone Cluster

Percona XtraDB ClusterMariaDB (Galera Cluster)

This feature allows you to create, in one click, an exact copy of your Galera Cluster onto a new set of hosts. The most common use case for cloning a deployment is for setting up a staging deployment for further development and testing. Cloning is a ‘hot’ procedure and does not affect the operations of the source cluster.

A clone will be created for this cluster. The following procedure applies:

  • Create a new cluster consisting of one node.
  • Stage the new cluster with SST (it is now cloned).
  • Nodes will be added to the cloned cluster until the cloned cluster size is reached.
  • Query Monitor settings and settings for Cluster Recovery and Node Recovery options are not cloned.
  • The my.cnf file may not be identical on the cloned cluster.
Field Description
Configuration
Cluster Name
  • The cloned cluster name.
Repository
  • Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by the database vendor repository.
  • Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connection.
Disable firewall
  • On – Disables firewall (recommended).
  • Off – This configuration task will be skipped. If the firewall is enabled, make sure you have configured the necessary ports.
Disable SELinux/AppArmor
  • On – Disables SELinux or AppArmor (recommended).
  • Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
Add Host
Node
  • IP address or hostname of the target node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via passwordless SSH.
  • If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.

Upgrades

Performs database and load-balancer-related package upgrades.

Warning

A database upgrade is a risky operation and ClusterControl does not support an upgrade rollback operation. Please perform this activity on a test/staging cluster first before proceeding with the actual upgrade. Upgrades should only be performed when there is as little traffic as possible on the cluster. Performing a full database backup (or taking a full VM snapshot) before an upgrade will increase the chance of recovery in case a rollback is needed. Taking a backup (before the upgrade) or restoring a backup (to rollback) is not part of this particular upgrade job.

ClusterControl will perform the software upgrade based on what is available in the package repository for the particular vendor. Having said that, if the database vendor repository is not up-to-date, or cannot be updated (e.g., a non-internet environment, or outbound connections behind a pre-configured HTTP proxy), ClusterControl will not be able to perform the upgrade since no new packages will appear in the server’s repository list.
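
Because the upgrade depends entirely on the vendor repository, it can be worth checking what the package manager sees before triggering the job. For example, on a Debian/Ubuntu node (the package name pattern is illustrative):

    # List upgradable database packages as seen by the node's repositories
    sudo apt-get update
    apt list --upgradable 2>/dev/null | grep -i -E 'mysql|mariadb|percona'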

Minor Upgrade

MySQL-based clustersMariaDB-based clustersMongoDB Replica SetMongoDB Sharded ClusterPostgreSQL-based clustersTimescaleDB-based clusters

Performs minor software upgrades for database and load balancer software. Minor upgrade version formatting is notably different depending on the database type and vendor:

  • MySQL/Percona Server for MySQL – 8.0.x to 8.0.y
  • MariaDB – 10.11.x to 10.11.y
  • MongoDB – 5.0.x to 5.0.y
  • PostgreSQL/TimescaleDB/EnterpriseDB – 15.x to 15.y
  • ProxySQL – 2.6.x to 2.6.y
  • PgBouncer – 1.22.x to 1.22.y
  • HAProxy – 2.1.x to 2.1.y
  • MariaDB MaxScale – 22.08.x to 22.08.y

For a primary-replica replication setup, ClusterControl will perform the upgrade starting from the replica/secondary nodes and will eventually perform the upgrade on the primary node of the cluster (secondary first, primary last). During the eventual primary node upgrade, expect a service disruption to the database cluster service until the primary node is online again after restarting. For a multi-primary setup, ClusterControl will upgrade the database nodes in random order, one node at a time. If a node fails to be upgraded, the upgrade job is aborted and manual intervention is required to recover or reinstall the node. Due to this, it is important to schedule a proper maintenance window and perform the upgrade when there is as little traffic as possible on the cluster.

Field  Description
Upgrade
  • Triggers an upgrade job using the corresponding package manager.
  • Minor upgrades are performed online on the selected node(s), one node at a time. The node will be stopped, then the software will be updated, and then the node will be started again. If a node fails to upgrade, the upgrade process is aborted and manual intervention is required to recover or reinstall the node.
  • It is a good idea to perform Check for upgrades first before performing this operation.
Check for upgrades
  • Triggers a job to check for any new versions of database-related packages. It is recommended to perform this operation before performing an actual upgrade.
Select nodes
  • Toggle all nodes that you want to upgrade. Clicking on Database nodes will select all nodes in that cluster.

Major Upgrade

PostgreSQL-based clustersTimescaleDB-based clusters

Performs major software upgrades for the database software. Only one major version upgrade is supported at a time, for example, from PostgreSQL 13.x to 14.x. At the moment, only PostgreSQL-based clusters are supported. If you want to upgrade from PostgreSQL 12.x to 14.x, you have to upgrade stage-by-stage to PostgreSQL 13.x first, followed by PostgreSQL 14.x.

For a primary-replica replication setup, ClusterControl will perform the upgrade starting from the replica/secondary nodes and will eventually perform the upgrade on the primary node of the cluster (secondary first, primary last). During the eventual primary node upgrade, expect a service disruption to the database cluster service until the primary node is online again after restarting. If a node fails to be upgraded, the upgrade job is aborted and manual intervention is required to recover or reinstall the node. Due to this, it is important to schedule a proper maintenance window and perform the upgrade when there is as little traffic as possible on the cluster.
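
The methods below correspond to standard pg_upgrade invocations. As an illustration of the underlying operation that ClusterControl automates (the binaries, data directories, and versions are examples only):

    # Illustrative pg_upgrade run from PostgreSQL 13 to 14 using hard links
    /usr/pgsql-14/bin/pg_upgrade \
      --old-bindir=/usr/pgsql-13/bin \
      --new-bindir=/usr/pgsql-14/bin \
      --old-datadir=/var/lib/pgsql/13/data \
      --new-datadir=/var/lib/pgsql/14/data \
      --link
    # pg_upgrade also supports --check to validate compatibility without upgrading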

Field  Description
Configuration
Vendor
  • Detected current database vendor. This is informational and not configurable.
Current version
  • Detected current database version. This is informational and not configurable.
Upgrade version
  • Available upgrade version. This is informational and not configurable. ClusterControl only allows one major version upgrade at a time.
Method
  • copy – This is the default method. pg_upgrade is run with the --copy option, which copies the data files to the new cluster; this is faster than doing a regular backup and restore. It requires enough disk storage capacity to hold both the old and the new files.
    Warning
    Data stored in user-defined tablespaces is not copied to the new cluster during an upgrade. It remains in its original file system location but is duplicated into a subdirectory indicating the version number of the new cluster. To manually relocate files stored in a tablespace after upgrading, move the files to the new location and update the symbolic links to point to them.
  • link – This method uses the --link option with pg_upgrade. Hard links are used instead of copying (--copy) files, which is faster and requires no extra disk storage. It is crucial to refrain from starting the old cluster after this step, to avert any potential data corruption in the new cluster since they share the data files.
  • pgdumpall – This is an alternative method to pg_upgrade. It uses pg_dumpall to perform a logical backup on the old cluster and then restores it on the new cluster. This requires disk storage capacity to hold the backup in addition to the old and new files.
A major upgrade is performed at your own risk
  • Accept the disclaimer to proceed with the upgrade.
Advanced
Temporary master port
  • The temporary PostgreSQL port of the older version. ClusterControl will reconfigure the PostgreSQL port of the older instances to this port for the duration of the upgrade, since the common port, 5432, will be taken over by the newer PostgreSQL instance after the upgrade job completes.
Temporary upgrade candidate port
  • The temporary PostgreSQL port of the newer version while the upgrade job is running. ClusterControl distinguishes every node with a “hostname:port” pair. This provides a temporary entity for the newer database instance for the duration of the upgrade before it takes over the actual PostgreSQL port. This port will be released after the upgrade job is completed.
