Provides information and management options for database clusters managed by ClusterControl. Clicking Clusters in the sidebar menu lists all database clusters on the main panel; each entry shows a real-time summary of the cluster together with a dropdown management menu (see Cluster Actions):
Each entry will have the following details:
- Cluster name: Configurable during import or deployment, also via the s9s command line.
- Cluster ID: Immutable ID assigned by ClusterControl after it is registered.
- Cluster type: The database cluster type, recognized by ClusterControl. See Supported Databases.
- Nodes: All nodes grouped under this particular cluster. See Nodes.
- Auto recovery: The ClusterControl automatic recovery settings. See Automatic Recovery.
- Load: Last 5 minutes of cluster load average.
- …: Every supported database cluster has its own set of dropdown menus. See Cluster Actions.
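The same summary is also available from the s9s command line mentioned above; a minimal sketch, assuming the s9s CLI is installed and configured on the ClusterControl host:

```bash
# List all clusters with their ID, state, type, owner, and name.
s9s cluster --list --long
```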
Choose a database cluster from the expandable list in the left-side menu to drill down into the cluster-specific features. When a cluster is selected, the navigation breadcrumbs (top panel) will update to reflect the corresponding cluster.
Cluster Actions
Provides shortcuts to the main cluster functionality. Each database cluster has its own set of functionality as described below:
- MySQL/MariaDB (replication/standalone)
- Galera Cluster
- PostgreSQL/TimescaleDB (streaming replication/standalone)
- MongoDB
- Redis
- Microsoft SQL Server
- Elasticsearch
- Valkey
MySQL/MariaDB (replication/standalone)
Applies to: MySQL, Percona Server, MariaDB Server.

| Feature | Description |
| --- | --- |
| Add new → Replication node | |
| Add new → Load balancer | |
| Schedule maintenance | |
| Encryption settings → Change SSL Certificate | |
| Encryption settings → SSL encryption | |
| Cluster/node recovery → Cluster recovery | |
| Cluster/node recovery → Node recovery | |
| Change RPC API token | |
| Audit logging | |
| Readonly | |
| Upgrades | |
| Edit details | |
| Remove cluster | |
Galera Cluster
Applies to: MySQL, Percona XtraDB Cluster, MariaDB (Galera Cluster).

| Feature | Description |
| --- | --- |
| Add new → Node | |
| Add new → Replication node | |
| Add new → Load balancer | |
| Schedule maintenance | |
| Encryption settings → Change SSL Certificate | |
| Encryption settings → Change Galera SSL Certificate | |
| Encryption settings → Galera SSL encryption | |
| Cluster/node recovery → Cluster recovery | |
| Cluster/node recovery → Node recovery | |
| Change RPC API token | |
| Readonly | |
| Clone/Replicate the cluster → Clone cluster | |
| Clone/Replicate the cluster → Create replica cluster | |
| Find most advanced node | |
| Upgrades | |
| Edit details | |
| Rolling restart | |
| Bootstrap cluster | |
| Stop Cluster | |
| Remove cluster | |
PostgreSQL/TimescaleDB (streaming replication/standalone)
Applies to: PostgreSQL, TimescaleDB.

| Feature | Description |
| --- | --- |
| Add new → Replication node | |
| Add new → Load balancer | |
| Schedule maintenance | |
| Encryption settings → Change SSL Certificate | |
| Encryption settings → SSL encryption | |
| Cluster/node recovery → Cluster recovery | |
| Cluster/node recovery → Node recovery | |
| Change RPC API token | |
| Audit logging | |
| Create replica cluster | |
| Upgrades | |
| Enable TimescaleDB | |
| Edit details | |
| Remove cluster | |
MongoDB
Applies to: MongoDB Replica Set, MongoDB Sharded Cluster.

| Feature | Description |
| --- | --- |
| Add node | |
| Schedule maintenance | |
| Convert to shard | |
| SSL encryption | |
| Cluster/node recovery → Cluster recovery | |
| Cluster/node recovery → Node recovery | |
| Change RPC API token | |
| Upgrades | |
| Edit details | |
| Remove cluster | |
Redis
Applies to: Redis standalone, Redis Sentinel.

| Feature | Description |
| --- | --- |
| Add node | |
| Schedule maintenance | |
| Cluster/node recovery → Cluster recovery | |
| Cluster/node recovery → Node recovery | |
| Change RPC API token | |
| Upgrades | |
| Edit details | |
| Remove cluster | |
Microsoft SQL Server
Applies to: Microsoft SQL Server.

| Feature | Description |
| --- | --- |
| Add node | |
| Schedule maintenance | |
| Cluster/node recovery → Cluster recovery | |
| Cluster/node recovery → Node recovery | |
| Change RPC API token | |
| Upgrades | |
| Edit details | |
| Remove cluster | |
Elasticsearch
| Feature | Description |
| --- | --- |
| Add Node | |
| Schedule maintenance | |
| Cluster/node recovery → Cluster recovery | |
| Cluster/node recovery → Node recovery | |
| Change RPC API token | |
| Upgrades | |
| Edit details | |
| Remove cluster | |
Valkey
Applies to: Valkey standalone, Valkey Cluster.

| Feature | Description |
| --- | --- |
| Add node | |
| Schedule maintenance | |
| Cluster/node recovery → Cluster recovery | |
| Cluster/node recovery → Node recovery | |
| Change RPC API token | |
| Upgrades | |
| Edit details | |
| Remove cluster | |
Automatic Recovery
ClusterControl is programmed with a number of recovery algorithms to automatically respond to different types of common failures affecting your database systems. It understands different database topologies and database-related process management to help determine the best way to recover the cluster. Some topology managers only cover cluster recovery and leave node recovery to you; ClusterControl supports recovery at both the cluster and node levels.
There are two recovery components supported by ClusterControl:
- Cluster – Attempt to recover a cluster to an operational state. Cluster recovery covers recovery attempts to bring up the entire cluster topology. See Cluster Recovery.
- Node – Attempt to recover a node to an operational state. Node recovery covers cases where a node was stopped outside of ClusterControl's knowledge, e.g., via a manual stop command from the SSH console, or the process being killed by the OOM killer. See Node Recovery.
These two components are the most important pieces in keeping service availability as high as possible.
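Both toggles are persisted in the cmon configuration. As a sketch (assuming the default `/etc/cmon.d` layout and cluster ID 1), you can inspect them from the ClusterControl host:

```bash
# 1 = recovery enabled, 0 = disabled; the file name encodes the cluster ID.
grep -E '^enable_(cluster|node)_autorecovery' /etc/cmon.d/cmon_1.cnf
```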
Node Recovery
ClusterControl can recover a database node in case of intermittent failure by monitoring the process and connectivity to the database nodes. For the process, it works similarly to systemd, where it will make sure the MySQL service is started and running unless you intentionally stopped it via ClusterControl UI.
If the node comes back online, ClusterControl will establish a connection back to the database node and will perform the necessary actions. The following is what ClusterControl would do to recover a node:
- It will wait 30 seconds for systemd/chkconfig/init to start up the monitored services/processes.
- If the monitored services/processes are still down, ClusterControl will try to start the database service automatically.
- If ClusterControl is unable to recover the monitored services/processes, an alarm will be raised.
If a database shutdown is initiated by the user via ClusterControl, ClusterControl will not attempt to recover the particular node at a later stage. It expects the user to start it again via the ClusterControl UI (Start Node) or explicitly by using an OS command.
The recovery includes all database-related services like ProxySQL, HAProxy, MaxScale, Keepalived, PgBouncer, Prometheus exporters, and garbd. Special attention goes to Prometheus exporters, where ClusterControl uses a program called daemon to daemonize the exporter process. ClusterControl will try to connect to the exporter's listening port for health checks and verification. Thus, it is recommended to open the exporter ports from the ClusterControl and Prometheus servers to avoid false alarms during recovery.
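A quick manual probe of an exporter port can confirm what this health check sees; the host and port below are placeholders (9100 is the common node_exporter default):

```bash
# A reachable exporter returns Prometheus metrics as plain text.
curl -s http://db1.example.com:9100/metrics | head -n 5
```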
Cluster Recovery
ClusterControl understands the database topology and follows best practices in performing the recovery. For a database cluster that comes with built-in fault tolerance like Galera Cluster, NDB Cluster, and MongoDB Replica Set, the failover process will be performed automatically by the database server via quorum calculation, heartbeat, and role switching (if any). ClusterControl monitors the process, makes the necessary adjustments to the visualization, like reflecting the changes under the Topology view, and adjusts the monitoring and management components for the new role, e.g., a new primary node in a replica set.
For database technologies that do not have built-in fault tolerance with automatic recovery, like MySQL/MariaDB Replication and PostgreSQL/TimescaleDB Streaming Replication (see further down), ClusterControl will perform the recovery procedures following the best practices provided by the database vendor. If the recovery fails, user intervention is required, and you will receive an alarm notification.
In a mixed/hybrid topology, for example, an asynchronous replica that is attached to a Galera Cluster or NDB Cluster, the node will be recovered by ClusterControl if cluster recovery is enabled.
Cluster recovery does not apply to standalone MySQL servers. However, it’s recommended to turn on both node and cluster recoveries for this cluster type in the ClusterControl UI.
MySQL/MariaDB Replication
ClusterControl supports recovery of the following MySQL/MariaDB replication setup:
- Primary-replica with MySQL GTID
- Primary-replica with MariaDB GTID
- Primary-primary with MySQL GTID
- Primary-primary with MariaDB GTID
- Asynchronous replica attached to a Galera Cluster
ClusterControl will respect the following parameters when performing cluster recovery:
- `enable_cluster_autorecovery`
- `auto_manage_readonly`
- `repl_password`
- `repl_user`
- `replication_auto_rebuild_slave`
- `replication_check_binlog_filtration_bf_failover`
- `replication_check_external_bf_failover`
- `replication_failed_reslave_failover_script`
- `replication_failover_blacklist`
- `replication_failover_events`
- `replication_failover_wait_to_apply_timeout`
- `replication_failover_whitelist`
- `replication_onfail_failover_script`
- `replication_post_failover_script`
- `replication_post_switchover_script`
- `replication_post_unsuccessful_failover_script`
- `replication_pre_failover_script`
- `replication_pre_switchover_script`
- `replication_skip_apply_missing_txs`
- `replication_stop_on_error`
For more details on each of the parameters, refer to the documentation page.
ClusterControl will obey the following rules when monitoring and managing a primary-replica replication:
- All nodes will be started with `read_only=ON` and `super_read_only=ON` (regardless of their role).
- Only one primary (`read_only=OFF`) is allowed to operate at any given time.
- Rely on the MySQL variable `report_host` to map the topology.
- If there are two or more nodes that have `read_only=OFF` at a time, ClusterControl will automatically set `read_only=ON` on both primaries to protect them against accidental writes. User intervention is required to pick the actual primary by disabling read-only (see the check below).
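You can verify these flags on any node yourself; a sketch with placeholder host and credentials:

```bash
# read_only/super_read_only should be ON on replicas and OFF on the primary;
# report_host is what ClusterControl uses to map the topology.
mysql -h db1.example.com -u root -p \
  -e "SHOW VARIABLES WHERE Variable_name IN ('read_only','super_read_only','report_host');"
```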
In case the active primary goes down, ClusterControl will attempt to perform the primary failover in the following order:
- After 3 seconds of primary unreachability, ClusterControl will raise an alarm.
- Check the replica availability, at least one of the replicas must be reachable by ClusterControl.
- Pick one replica as a candidate to be promoted to primary.
- ClusterControl will calculate the probability of errant transactions if GTID is enabled.
- If no errant transaction is detected, the chosen node will be promoted as the new primary.
- Create and grant the replication user to be used by the replicas.
- Change the primary for all replicas that were pointing to the old primary to the newly promoted primary.
- Start the replicas and enable read-only.
- Flush logs on all nodes.
If the replica promotion fails, ClusterControl will abort the recovery job. User intervention or a cmon service restart is required to trigger the recovery job again.
When the old primary is available again, it will be started as read-only and will not be part of the replication. User intervention is required.
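Reintroducing the old primary as a replica is therefore a manual step; a minimal sketch assuming MySQL GTID auto-positioning, with placeholder host and credentials:

```bash
# Point the old primary at the newly promoted primary and start replication.
mysql -u root -p -e "CHANGE MASTER TO \
  MASTER_HOST='new-primary.example.com', \
  MASTER_USER='rpl_user', \
  MASTER_PASSWORD='secret', \
  MASTER_AUTO_POSITION=1; \
  START SLAVE;"
```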
PostgreSQL/TimescaleDB Streaming Replication
ClusterControl supports recovery of the following PostgreSQL replication setup:
- PostgreSQL Streaming Replication
- TimescaleDB Streaming Replication
ClusterControl will respect the following parameters when performing cluster recovery:
- `enable_cluster_autorecovery`
- `repl_password`
- `repl_user`
- `replication_auto_rebuild_slave`
- `replication_failover_whitelist`
- `replication_failover_blacklist`
For more details on each of the parameters, refer to the documentation page.
ClusterControl will obey the following rules for managing and monitoring a PostgreSQL streaming replication setup:
wal_level
is set toreplica
(orhot_standby
depending on the PostgreSQL version).- The parameter
archive_mode
is set to ON on the primary. - Set
recovery.conf
file on the replica nodes, which turns the node into a hot standby with read-only enabled.
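These settings can be confirmed on the primary with psql; connection details below are placeholders:

```bash
# wal_level should report 'replica' (or 'hot_standby' on older versions),
# and archive_mode should report 'on'.
psql -h primary.example.com -U postgres -c "SHOW wal_level;" -c "SHOW archive_mode;"
```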
In case the active primary goes down, ClusterControl will attempt to perform the cluster recovery in the following order:
- After 10 seconds of primary unreachability, ClusterControl will raise an alarm.
- After 10 seconds of graceful waiting timeout, ClusterControl will initiate the primary failover job.
- Sample the `replayLocation` and `receiveLocation` on all available nodes to determine the most advanced node.
- Promote the most advanced node as the new primary.
- Stop the replicas.
- Verify the synchronization state with `pg_rewind`.
- Restart the replicas with the new primary.
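The most advanced node can also be determined manually by sampling the same positions; a sketch with placeholder hosts:

```bash
# Higher LSNs mean the replica has received/replayed more WAL.
for host in replica1.example.com replica2.example.com; do
  psql -h "$host" -U postgres -t -A \
    -c "SELECT '$host', pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();"
done
```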
If the replica promotion fails, ClusterControl will abort the recovery job. User intervention or a cmon service restart is required to trigger the recovery job again.
When the old primary is available again, it will be forced to shut down and will not be part of the replication. User intervention is required. See further down.
When the old primary comes back online, if the PostgreSQL service is running, ClusterControl will force a shutdown of the PostgreSQL service. This is to protect the server from accidental writes, since it would be started without a recovery file (`recovery.conf`), which means it would be writable. You should expect the following lines to appear in `postgresql-{day}.log`:
2019-11-27 05:06:10.091 UTC [2392] LOG: database system is ready to accept connections
2019-11-27 05:06:27.696 UTC [2392] LOG: received fast shutdown request
2019-11-27 05:06:27.700 UTC [2392] LOG: aborting any active transactions
2019-11-27 05:06:27.703 UTC [2766] FATAL: terminating connection due to administrator command
2019-11-27 05:06:27.704 UTC [2758] FATAL: terminating connection due to administrator command
2019-11-27 05:06:27.709 UTC [2392] LOG: background worker "logical replication launcher" (PID 2419) exited with exit code 1
2019-11-27 05:06:27.709 UTC [2414] LOG: shutting down
2019-11-27 05:06:27.735 UTC [2392] LOG: database system is shut down
PostgreSQL was started after the server came back online at around 05:06:10, but ClusterControl performed a fast shutdown 17 seconds later, at around 05:06:27. If this is not the behavior you want, you can momentarily disable node recovery for this cluster.
Add Node
Adds a new node by creating or importing it into the running database cluster. At the moment, this feature is available for four cluster types:
- Galera Cluster – You may add a new node, or import an existing database node into the cluster.
- Redis – Only adding a new Redis replica is supported.
- Elasticsearch – Only adding a new Elasticsearch master, data, master-data, or coordinator node is supported.
- Valkey – Only adding a new Valkey replica is supported.
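The same operation is exposed through the s9s CLI; a hedged sketch with a placeholder address:

```bash
# Add a node to cluster ID 1 and wait for the job to finish; the target host
# must be reachable over passwordless SSH from the ClusterControl server.
s9s cluster --add-node --cluster-id=1 --nodes="10.0.0.15" --wait
```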
Galera Cluster
Applies to: Percona XtraDB Cluster, MariaDB (Galera Cluster).

Adds a new or existing database node into the cluster. You can scale out your cluster by adding more database nodes. The new node will automatically join and synchronize with the rest of the cluster.
Create a database node
If you specify a new hostname or IP address, make sure that the node is accessible from the ClusterControl node via passwordless SSH. See Passwordless SSH. For the database version of the new node, ClusterControl will attempt to match the exact database version of the cluster.
| Field | Description |
| --- | --- |
| **Node Configuration** | |
| Data directory | |
| Galera segment | |
| Configuration template | |
| Install software | |
| Disable firewall | |
| Disable SELinux/AppArmor | |
| **Advanced Settings** | |
| Rebuild from a backup | |
| Include in LoadBalancer set (if exists) | |
| **Add Node** | |
| Node | |
Import a database node
Imports an existing replication node into ClusterControl. Use this feature if you have added a replica manually to your cluster and want it to be detected/managed by ClusterControl. ClusterControl will then detect the new database node as part of the cluster and start to manage and monitor it as with the rest of the cluster nodes. This is useful if a replica node has been configured outside ClusterControl, e.g., through Puppet, Ansible, or manually.
| Field | Description |
| --- | --- |
| **Node Configuration** | |
| Port | |
| Include in LoadBalancer set (if exists) | |
| **Add Node** | |
| Node | |
Redis
Adds a new Redis replica node to join the cluster. If you specify a new hostname or IP address, make sure that the node is accessible from the ClusterControl node via passwordless SSH. See Passwordless SSH.
| Field | Description |
| --- | --- |
| **Node configuration** | |
| Port | |
| Redis sentinel port | |
| Install software | |
| Disable firewall | |
| Disable SELinux/AppArmor | |
| **Add Node** | |
| Node | |
Elasticsearch
Adds a new Elasticsearch master, data, master-data or coordinator node to join the selected cluster. If you specify a new hostname or IP address, make sure that the node is accessible from the ClusterControl node via passwordless SSH. See Passwordless SSH.
| Field | Description |
| --- | --- |
| **Node configuration** | |
| Port | |
| Install software | |
| Disable firewall | |
| Disable SELinux/AppArmor | |
| **Add Node** | |
| Node | |
Valkey
Adds a new Valkey replica node to join the cluster. If you specify a new hostname or IP address, make sure that the node is accessible from the ClusterControl node via passwordless SSH. See Passwordless SSH.
| Field | Description |
| --- | --- |
| **Node configuration** | |
| Port | |
| Install software | |
| Disable firewall | |
| Disable SELinux/AppArmor | |
| **Add Node** | |
| Node | |
Add Replication Node
MySQL/MariaDB Replication/Galera Cluster
Adding a replication node requires at least one existing node configured with binary logging and GTID enabled. This is also true for Galera Cluster. The following must be true for the primaries:
- At least one primary among the Galera nodes.
- MySQL/MariaDB GTID must be enabled.
- `log_slave_updates` must be enabled.
- The primary's MySQL port is accessible by ClusterControl and the replicas.
- To enable binary logs for Galera Cluster, go to ClusterControl → Nodes → choose the database server → Enable Binary Logging.
For the replica, you need a separate host or VM, with or without MySQL installed. If MySQL is not installed and you choose to have ClusterControl install it on the replica host, ClusterControl will perform the necessary actions to prepare the replica. This includes configuring the root password, creating the replication user, configuring MySQL, starting the service, and starting the replication link. The MySQL or MariaDB packages are based on the chosen vendor; for example, if you are running a Percona XtraDB Cluster, ClusterControl will prepare the replica using Percona Server. Before the deployment, the following must be true for the replica node:
- The replica node must be accessible using passwordless SSH from the ClusterControl server.
- MySQL port (default 3306) and netcat port 9999 on the replica host are open for connections.
- The replica node must use the same operating system as the primary node/cluster.
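A quick way to confirm the primary-side prerequisites (shown for MySQL; MariaDB exposes different GTID variables), with placeholder host and credentials:

```bash
# log_bin and log_slave_updates should be ON; gtid_mode should be ON (MySQL).
mysql -h primary.example.com -u root -p \
  -e "SHOW VARIABLES WHERE Variable_name IN ('log_bin','log_slave_updates','gtid_mode');"
```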
Create a Replication Node
Applies to: MySQL, Percona Server, Percona XtraDB Cluster, MariaDB (Server and Galera Cluster).

Creates a new asynchronous replication node. For a Galera Cluster, this is not a new Galera database node; instead, it is a read-only replica attached via MySQL/MariaDB replication. The replica can be set up through backup streaming from the primary to the replica, or by restoring an existing full database backup.
| Field | Description |
| --- | --- |
| **Node configuration** | |
| Netcat port | |
| Port | |
| Install software | |
| Disable firewall | |
| Disable SELinux/AppArmor | |
| **Advanced settings** | |
| Rebuild from a backup | |
| Include in LoadBalancer set (if exists) | |
| Delay the replica node | |
| Semi-synchronous replication | |
| Version | |
| **Add Node** | |
| Primary node | |
| Node | |
Import a Replication Node
Applies to: MySQL, Percona Server, Percona XtraDB Cluster, MariaDB (Server and Galera Cluster).

Imports an existing replication node into ClusterControl. Use this feature if you have added a replica manually to your cluster and want it to be detected/managed by ClusterControl. ClusterControl will then detect the new database node as part of the cluster and start to manage and monitor it as with the rest of the cluster nodes. This is useful if a replica node has been configured outside of ClusterControl, e.g., through Puppet, Ansible, or manually.
| Field | Description |
| --- | --- |
| **Node Configuration** | |
| Port | |
| Include in LoadBalancer set (if exists) | |
| **Add Node** | |
| Node | |
PostgreSQL/TimescaleDB
A PostgreSQL replication replica requires at least one primary node. The following must be true for the primary:
- At least one primary under the same cluster ID.
- Only PostgreSQL 9.6 and later are supported.
- Primary’s PostgreSQL port is accessible by ClusterControl and replica hosts.
For the replica, you need a separate host or VM, with or without PostgreSQL installed. If PostgreSQL is not installed and you choose to have ClusterControl install it on the host, ClusterControl will perform the necessary actions to prepare the replica, for example, creating the replication user, configuring PostgreSQL, starting the server, and starting the replication. Before the deployment, the following must be true:
- The replica node must be accessible using passwordless SSH from the ClusterControl server.
- The PostgreSQL port (default 5432) on the replica is open for connections for at least the ClusterControl server and the other members in the cluster.
To prepare the PostgreSQL configuration file for the replica, go to ClusterControl → Manage → Configurations → Template Configuration files. Later, specify this template file when adding a replica.
Create a Replication Node
Applies to: PostgreSQL, TimescaleDB.

The replica will be set up by streaming a pg_basebackup backup from the primary node to the replica node. The primary node's configuration will be altered to allow the replica node to join.
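Under the hood this is comparable to running pg_basebackup against the primary; a minimal sketch with placeholder host, user, and data directory:

```bash
# -X stream ships WAL during the copy; -R writes the replica recovery settings.
pg_basebackup -h primary.example.com -U repl_user \
  -D /var/lib/pgsql/data -X stream -R -P
```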
| Field | Description |
| --- | --- |
| **Node configuration** | |
| Port | |
| Use package default for datadir | |
| Install software | |
| **Advanced settings** | |
| Include in LoadBalancer set (if exists) | |
| Instance name | |
| Semi-synchronous replication | |
| **Add Node** | |
| Primary node | |
| Node | |
Import a Replication Node
Applies to: PostgreSQL, TimescaleDB.

Imports an existing replication node into ClusterControl. Use this feature if you have added a replica manually to your cluster and want it to be detected/managed by ClusterControl. ClusterControl will then detect the new database node as part of the cluster and start to manage and monitor it as with the rest of the cluster nodes. This is useful if a replica node has been configured outside of ClusterControl, e.g., through Puppet, Ansible, or manually.
| Field | Description |
| --- | --- |
| **Node Configuration** | |
| Port | |
| Logfile path | |
| Include in LoadBalancer set (if exists) | |
| **Add Node** | |
| Node | |
Bootstrap Cluster
Applies to: Percona XtraDB Cluster, MariaDB (Galera Cluster).

Bootstrap cluster refers to the process of bootstrapping or initializing a new Galera Cluster. When you are setting up a new Galera Cluster or performing a full backup restoration, you typically need to bootstrap it, which involves designating one node as the initial primary component of the cluster. This bootstrap node serves as the starting point for the cluster, and the other nodes will then join this initial node to form the cluster.
The following actions will be performed:
- The cluster will be initialized from the selected node.
- All nodes will be stopped unless they are already stopped.
- When the bootstrap command is successful, the selected node will be Synced. The rest of the nodes will be started as joiners, one node at a time.
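For reference, bootstrapping the first node manually (outside ClusterControl) typically looks like the following; which command applies depends on the vendor and init system:

```bash
# MariaDB (Galera Cluster) on systemd:
galera_new_cluster
# Percona XtraDB Cluster on systemd:
systemctl start mysql@bootstrap.service
```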
| Field | Description |
| --- | --- |
| Bootstrap node | |
| Graceful shutdown timeout (in seconds) | |
| Force stop the nodes (after shutdown time) | |
| Clear MySQL datadir on Joining nodes | |
Audit Logging
Applies to: MariaDB Replication, MariaDB (Galera Cluster), PostgreSQL, TimescaleDB.

Only for MariaDB-based and PostgreSQL-based clusters. Enables audit logging on the cluster using policy-based monitoring and logging of connection and query activity for security and compliance purposes. It allows database administrators to track and monitor database operations, such as logins, queries, data modifications, and schema changes. The plugin records these activities in a structured format, making it easier to analyze and audit database usage. This feature will enable audit logging on all nodes in the cluster.
For MariaDB audit logging, the following fields are presented:
| Field | Description |
| --- | --- |
| Log Path | |
| Rotation Size in MB | |
| Rotations | |
| Events | |
| Exclude Users | |
For PostgreSQL/TimescaleDB audit logging, the following fields are presented:
| Field | Description |
| --- | --- |
| Events | |
Add Load Balancer
Manages the deployment of load balancer-related software (HAProxy, ProxySQL, PgBouncer, and MaxScale), virtual IP addresses (Keepalived), and Garbd. For Galera Cluster, it is also possible to add the Galera arbitrator daemon (Garbd) through this interface.
ProxySQL
Applies to: MySQL-based clusters, MariaDB-based clusters.

By default, ClusterControl deploys ProxySQL in read/write split mode – your read-only traffic will be sent to replicas while your writes will be sent to a writable primary, by creating two host groups. In case of primary failure, ProxySQL will detect the new writable primary and route writes to it automatically without any user intervention. ProxySQL offers a wide range of features and capabilities that make it suitable for various scenarios, including improving the scalability, availability, performance, and security of MySQL database infrastructures.
Create ProxySQL Service
Deploy a new ProxySQL server. You can use an existing database server or another new host by specifying the hostname or IPv4 address. With two or more ProxySQL nodes, you can then configure a virtual IP address service using Keepalived. See Keepalived.
| Field | Description |
| --- | --- |
| **Where to install** | |
| Server Address | |
| Admin Port | |
| Listening Port | |
| Version | |
| **ProxySQL Configuration** | |
| Disable Firewall | |
| Disable AppArmor/SELinux | |
| Import Configuration | |
| Use Native Clustering | |
| **Configuration** | |
| Administration User | |
| Administration Password | |
| Monitor User | |
| Monitor Password | |
| **Database user (Optional)** | |
| Existing User | |
| Create new User | |
| **Server instances** | |
| Include | |
| Max replication lag | |
| Max connection | |
| Weight | |
| Are you using implicit transactions? | |
After the ProxySQL installation finishes, the node will be listed under the Nodes page where you can manage and view the ProxySQL status, variables and settings.
Import ProxySQL
If you already have ProxySQL installed in your setup, you can easily import it into ClusterControl to benefit from monitoring and management of the instance.
| Field | Description |
| --- | --- |
| **ProxySQL location** | |
| Server Address | |
| Admin Port | |
| Listening Port | |
| **ProxySQL Configuration** | |
| Import Configuration | |
After the ProxySQL import operation finishes, the node will be listed under the Nodes page where you can manage and view the ProxySQL status.
HAProxy
Applies to: MySQL-based clusters, MariaDB-based clusters, PostgreSQL-based clusters, TimescaleDB-based clusters.

Installs and configures a HAProxy instance. ClusterControl will automatically install and configure HAProxy, install the `mysqlchk` script (for MySQL health checks) on each of the database nodes as part of the `xinetd` service, and start the HAProxy service. If you set up read/write splitting for primary-replica replication, there will be two listening ports configured (one for read-write and another one for read-only connections).
This feature is idempotent; you can execute it as many times as you want and it will always reinstall everything as configured.
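You can probe the health-check endpoint the same way HAProxy does; the port below assumes the conventional mysqlchk listener (9200) and a placeholder host:

```bash
# A healthy node answers with an HTTP 200 response from the xinetd service.
curl -si http://db1.example.com:9200/ | head -n 1
```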
Create HAProxy Service
Deploy a new HAProxy server. You can use an existing database server or another new host by specifying the hostname or IPv4 address. With two or more HAProxy nodes, you can then configure a virtual IP address service using Keepalived. See Keepalived.
| Field | Description |
| --- | --- |
| **Where to install** | |
| Server Address | |
| Policy | |
| Listen Port (Read/Write) | |
| Install for read/write splitting (master-slave replication) | |
| **Security configuration** | |
| Disable Firewall | |
| Disable AppArmor/SELinux | |
| Installation settings | |
| Overwrite existing /usr/local/sbin/mysqlchk on target | |
| **Advanced settings** | |
| Stats Socket | |
| Admin Port | |
| Admin User | |
| Admin Password | |
| Backend Name | |
| Timeout Server (seconds) | |
| Timeout Client (seconds) | |
| Max Connections Frontend | |
| Max Connections Backend/per instance | |
| xinetd allow connections from | |
| **Server instances** | |
| Include | |
| Advanced options → Role | |
| Advanced options → Connection Address | |
After the HAProxy installation finishes, the node will be listed under the Nodes page where you can manage and view the HAProxy connection status.
Import HAProxy
Imports an existing HAProxy instance. With two or more HAProxy nodes, you can then configure a virtual IP address service using Keepalived. See Keepalived.
| Field | Description |
| --- | --- |
| **Configuration** | |
| Server Address | |
| Port | |
| **Advanced settings** | |
| Admin User | |
| Admin Password | |
| cmdline | |
| LB Name | |
| HAProxy config | |
| Stats socket | |
After the HAProxy import operation finishes, the node will be listed under the Nodes page where you can manage and view the HAProxy connection status.
You will need an admin user/password set in the HAProxy configuration; otherwise, you will not see any HAProxy stats.
Keepalived
Keepalived uses the IP Virtual Server (IPVS) kernel module to provide transport layer (Layer 4) load balancing, redirecting requests for network-based services to individual members of a server cluster.
Keepalived requires two or more HAProxy, ProxySQL, or MariaDB MaxScale instances to provide virtual IP address failover. By default, the virtual IP address will be assigned to instance ‘Keepalived 1’. If the node goes down, the IP address will automatically fail over to ‘Keepalived 2’ accordingly.
To understand how ClusterControl configures Keepalived, see this blog post How ClusterControl Configures Virtual IP and What to Expect During Failover.
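For orientation, the resulting VRRP definition on each Keepalived node resembles the sketch below; the instance name, interface, priority, and addresses are illustrative placeholders:

```bash
# Inspect the generated configuration on a Keepalived host.
cat /etc/keepalived/keepalived.conf
# vrrp_instance VI_HAPROXY {
#   interface eth0
#   virtual_router_id 51
#   priority 101          # the highest priority holds the virtual IP
#   virtual_ipaddress {
#     10.0.0.100
#   }
# }
```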
Deploy Keepalived
Deploy a new Keepalived instance. You need at least two or more HAProxy, ProxySQL, or MariaDB MaxScale instances before you can proceed with this deployment.
| Field | Description |
| --- | --- |
| Load balancer type | |
| Keepalived | |
| Add Keepalived Instance | |
| Remove Keepalived Instance | |
| Virtual IP | |
| Network Interface | |
After the Keepalived installation finishes, the node will be listed under the Nodes page where you can view the Keepalived status and configuration.
Import Keepalived
Import an existing Keepalived instance. You need at least two or more HAProxy, ProxySQL, or MariaDB MaxScale instances before you can proceed with this import.
| Field | Description |
| --- | --- |
| Keepalived 1 | |
| Add Keepalived Instance | |
| Remove Keepalived Instance | |
| Virtual IP | |
After the Keepalived import job finishes, the node will be listed under the Nodes page where you can view the Keepalived status and configuration.
MariaDB MaxScale
Applies to: MySQL-based clusters, MariaDB-based clusters.

MaxScale is an intelligent proxy that allows the forwarding of database statements to one or more database servers using complex rules, a semantic understanding of the database statements, and the roles of the various servers within the backend cluster of databases.
You can deploy a new MaxScale node or import an existing one as a load balancer and query router for your Galera Cluster and MySQL/MariaDB replication. For a new deployment using ClusterControl, by default, two production services will be created:
- RW – Implements read-write split access.
- RR – Implements round-robin access.
ClusterControl performs the MariaDB MaxScale installation via direct package download, without using the MariaDB repository. The package download URL is kept inside `/usr/share/cmon/templates/packages.conf` on the ClusterControl server, under the `[maxscale]` section. Occasionally, the provided URL will be outdated as MariaDB releases a new minor version and removes the older minor version for a specific MaxScale major version. If that is the case, a manual modification is required to update the download link in this file. The updated download URL is available on the MariaDB MaxScale website.
Deploy MaxScale
Deploy MariaDB MaxScale as MySQL/MariaDB load balancer. With two MaxScale nodes, you can then configure a virtual IP address service using Keepalived. See Keepalived.
| Field | Description |
| --- | --- |
| **Where to install** | |
| Server Address | |
| **Configure MaxScale** | |
| Threads | |
| RR Port (Port for round-robin listener) | |
| RW Port (Port for read/write split listener) | |
| Disable Firewall | |
| Disable AppArmor/SELinux | |
| **Configuration** | |
| MaxScale admin username | |
| MaxScale admin password | |
| MaxScale MySQL Username | |
| MaxScale MySQL Password | |
| **Server instances** | |
| Include | |
After the MariaDB MaxScale installation finishes, the node will be listed under the Nodes page, where you can manage it and run maxctrl commands from the web interface.
Import MaxScale
If you already have MaxScale installed in your setup, you can easily import it into ClusterControl to benefit from health monitoring and access to MaxAdmin – MaxScale’s CLI from the same interface you use to manage the database nodes. The only requirement is to have passwordless SSH configured between the ClusterControl node and the host where MaxScale is running.
| Field | Description |
| --- | --- |
| MaxScale Address | |
After the MariaDB MaxScale import job finishes, the node will be listed under the Nodes page, where you can manage it and run maxctrl commands from the web interface.
Garbd
Applies to: Percona XtraDB Cluster, MariaDB (Galera Cluster).

Exclusive to Galera Cluster. The Galera arbitrator daemon (garbd) can be installed to avoid network partitioning or split-brain scenarios.
| Field | Description |
| --- | --- |
| Server Address | |
| CmdLine | |
ClusterControl does not support or allow garbd deployment on a server where ClusterControl itself is running. There is a risk that the existing MySQL packages, which are managed by the software packaging tools, would be removed.
PgBouncer
Applies to: PostgreSQL-based clusters, TimescaleDB-based clusters.

PgBouncer is a lightweight connection pooler for PostgreSQL. It reduces PostgreSQL resource consumption (memory, backends, fork) and supports online restart or upgrade without dropping client connections. Using ClusterControl, you can manage PgBouncer on one or more nodes, manage multiple pools per node, and use three pool modes:
- session (default): When a client connects, a server connection will be assigned to it for the whole duration the client stays connected. When the client disconnects, the server connection will be put back into the pool.
- transaction: A server connection is assigned to a client only during a transaction. When PgBouncer notices that the transaction is over, the server connection will be put back into the pool.
- statement: The server connection will be put back into the pool immediately after a query completes. Multi-statement transactions are disallowed in this mode as they would break.
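The active mode is a single directive in pgbouncer.ini; a quick check, assuming the default paths ClusterControl uses:

```bash
# 'session' is the default; 'transaction' or 'statement' may appear instead.
grep -E '^(pool_mode|listen_port)' /etc/pgbouncer/pgbouncer.ini
```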
Deploy PgBouncer
ClusterControl only supports deploying PgBouncer on the same host as the PostgreSQL host. When deploying a PgBouncer node, ClusterControl will deploy using the following default values:

- Command: `/usr/bin/pgbouncer /etc/pgbouncer/pgbouncer.ini`
- Port: 6432
- Configuration file: `/etc/pgbouncer/pgbouncer.ini`
- Logfile: `/var/log/pgbouncer/pgbouncer.log`
- Auth file: `/etc/pgbouncer/userlist.txt`
- Pool mode: session
| Field | Description |
| --- | --- |
| **Authentication** | |
| PgBouncer Admin User | |
| PgBouncer Admin Password | |
| **Add nodes** | |
| PgBouncer Node | |
| Listen Port | |
After the PgBouncer installation finishes, the node will be listed under the Nodes page where you can manage the connection pools.
Import PgBouncer
ClusterControl only supports importing PgBouncer on the same host as the PostgreSQL host.
| Field | Description |
| --- | --- |
| **Authentication** | |
| PgBouncer Admin User | |
| PgBouncer Admin Password | |
| **Add nodes** | |
| PgBouncer Node | |
| Listen Port | |
After the PgBouncer import operation finishes, the node will be listed under the Nodes page where you can manage the connection pools.
Configure WAL
Applies to: PostgreSQL, TimescaleDB.

The WALs are the REDO logs in PostgreSQL. REDO logs contain all changes that were made in the database, and they are used for replication, recovery, online backup, and point-in-time recovery (PITR). Any changes that have not been applied to the data pages can be redone from the REDO logs.
This step is not mandatory, but it is extremely important for a robust replication setup, as it is necessary to avoid the primary server recycling old WAL files that have not yet been applied to the replica. If this occurs, the replica has to be recreated from scratch. Enabling write-ahead log (WAL) archiving makes it possible to support online backup and point-in-time recovery in PostgreSQL.
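The current archiving state can be checked on the primary; connection details are placeholders:

```bash
# archive_mode should be 'on' and archive_command should point at the WAL
# archive destination once this dialog has been applied.
psql -h primary.example.com -U postgres -c "SHOW archive_mode;" -c "SHOW archive_command;"
```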
| Field | Description |
| --- | --- |
| Archive Mode | |
| Compressed WAL Archive | |
| Custom WAL Archive Directory | |
| Apply to | |
Changing the current value leads to the loss of the collected continuous WAL archive, and thus the loss of the time frame available for point-in-time recovery (PITR).
Cluster-Cluster Replication
Applies to: Percona XtraDB Cluster, MariaDB (Galera Cluster), PostgreSQL, TimescaleDB.

This feature allows you to create a new cluster that will be replicating from this cluster. One common use case is disaster recovery: a hot standby site/cluster that can take over when the main site/cluster has failed. Clusters can be rebuilt from an existing backup or by streaming from a primary on the source cluster.
For MySQL-based clusters, ClusterControl will configure asynchronous MySQL replication from a primary cluster to a replica cluster. For PostgreSQL-based clusters, ClusterControl will configure asynchronous streaming replication between a primary cluster and a replica cluster. For Galera Cluster, the asynchronous replication can optionally be set up as uni-directional (primary → replica) or bi-directional (primary ↔ replica) replication.
There are two ways ClusterControl can create a replica cluster:
- Streaming data from the primary cluster – Stream the data from a primary using hot backup tools, e.g., Percona XtraBackup, MariaDB Backup, or pg_basebackup. You need to pick one node of the source cluster to replicate from.
- Stage cluster from a backup – Choose an existing full backup from the dropdown list. For MySQL or MariaDB, if none is listed, take a full backup of one of the database nodes in your cluster that has binary logging enabled.
Once the above options have been selected, the cluster deployment wizard will appear similar to deploying a new cluster. See Deploy Database Cluster.
A replica cluster will appear in the database cluster list after the deployment finishes. You will notice that the replica cluster entry has a green footer showing that it is a replica of another cluster. For bi-directional replication, you will see a double-headed arrow (a single-headed arrow for uni-directional replication) with the name and ID of the cluster it is replicating from, indicating that the cluster-cluster replication is now active. You may also check the topology from the cluster information card (hover over the cluster name to make the card appear).
We highly recommend enabling cluster-wide read-only on the replica cluster. Disable read-only only when promoting the replica cluster as the new primary cluster.
Clone Cluster
Applies to: Percona XtraDB Cluster, MariaDB (Galera Cluster).

This feature allows you to create, in one click, an exact copy of your Galera Cluster onto a new set of hosts. The most common use case for cloning a deployment is to set up a staging deployment for further development and testing. Cloning is a 'hot' procedure and does not affect the operations of the source cluster.
A clone will be created for this cluster. The following procedure applies:
- Create a new cluster consisting of one node.
- Stage the new cluster with SST (it is now cloned).
- Nodes will be added to the cloned cluster until the cloned cluster size is reached.
- Query Monitor settings and the settings for the Cluster Recovery and Node Recovery options are not cloned.
- The `my.cnf` file may not be identical on the cloned cluster.
| Field | Description |
| --- | --- |
| **Configuration** | |
| Cluster Name | |
| Repository | |
| Disable firewall | |
| Disable SELinux/AppArmor | |
| **Add Host** | |
| Node | |
Upgrades
Performs database and load-balancer-related package upgrades.
A database upgrade is a risky operation and ClusterControl does not support an upgrade rollback operation. Please perform this activity on a test/staging cluster first before proceeding with the actual upgrade. Upgrades should only be performed when there is as little traffic as possible on the cluster. Performing a full database backup (or taking a full VM snapshot) before an upgrade will increase the chance of recovery in case a rollback is needed. Taking a backup (before the upgrade) or restoring a backup (to rollback) is not part of this particular upgrade job.
ClusterControl will perform the software upgrade based on what is available in the package repository for the particular vendor. That said, if the database vendor repository is not up-to-date or cannot be updated (e.g., a non-Internet environment, or outbound connections behind a pre-configured HTTP proxy), ClusterControl will not be able to perform the upgrade, since no new packages will appear in the server's repository list.
Minor Upgrade
Applies to: MySQL-based clusters, MariaDB-based clusters, MongoDB Replica Set, MongoDB Sharded Cluster, PostgreSQL-based clusters, TimescaleDB-based clusters.

Performs minor software upgrades for database and load balancer software. Minor upgrade version formatting is notably different depending on the database type and vendor:
- MySQL/Percona Server for MySQL – 8.0.x to 8.0.y
- MariaDB – 10.11.x to 10.11.y
- MongoDB – 5.0.x to 5.0.y
- PostgreSQL/TimescaleDB/EnterpriseDB – 15.x to 15.y
- ProxySQL – 2.6.x to 2.6.y
- PgBouncer – 1.22.x to 1.22.y
- HAProxy – 2.1.x to 2.1.y
- MariaDB MaxScale – 22.08.x to 22.08.y
For a primary-replica replication setup, ClusterControl will perform the upgrade starting from the replica/secondary nodes and will eventually perform the upgrade on the primary node of a cluster (secondary first, primary last). During the eventual primary node upgrade, expect a service disruption to the database cluster service until the primary node is online again after restarting. For a multi-primary setup, ClusterControl will upgrade the database nodes in random order, one node at a time. If a node fails to be upgraded, the upgrade job is aborted and manual intervention is required to recover or reinstall the node. Due to this, it is important to schedule a proper maintenance window, and the upgrade should only be performed when there is as little traffic as possible on the cluster.
| Field | Description |
| --- | --- |
| Upgrade | |
| Check for upgrades | |
| Select nodes | |
Major Upgrade
Applies to: PostgreSQL-based clusters, TimescaleDB-based clusters.

Performs major software upgrades for the database software. Only one major version upgrade is supported at a time, for example, from PostgreSQL 13.x to 14.x. At the moment, only PostgreSQL-based clusters are supported. If you want to upgrade from PostgreSQL 12.x to 14.x, you have to upgrade stage-by-stage: to PostgreSQL 13.x first, followed by PostgreSQL 14.x.
For a primary-replica replication setup, ClusterControl will perform the upgrade starting from the replica/secondary nodes and will eventually perform the upgrade on the primary node of a cluster (secondary first, primary last). During the eventual primary node upgrade, expect a service disruption to the database cluster service until the primary node is online again after restarting. If a node fails to be upgraded, the upgrade job is aborted and manual intervention is required to recover or reinstall the node. Due to this, it is important to schedule a proper maintenance window, and the upgrade should only be performed when there is as little traffic as possible on the cluster.
| Field | Description |
| --- | --- |
| **Configuration** | |
| Vendor | |
| Current version | |
| Upgrade version | |
| Method | |
| A major upgrade is performed at your own risk | |
| **Advanced** | |
| Temporary master port | |
| Temporary upgrade candidate port | |