Requirements
Before setting up ClusterControl, it’s essential to ensure that your environment meets the necessary requirements for a smooth installation and optimal performance. ClusterControl is designed to work across a variety of infrastructure setups, from on-premise servers to cloud environments, and supports multiple database technologies.
This page outlines the hardware, software, network, and database-specific requirements needed to successfully deploy and run ClusterControl.
Hardware
The following table shows the recommended hardware specifications for a ClusterControl host:
Aspect | Minimum | Small (3-15 nodes) | Medium (15-50 nodes) | Large (50-150 nodes) |
---|---|---|---|---|
CPU architecture | x86_64 only | |||
CPU | 2 cores | 4 cores | 16 cores | 24 cores |
RAM | 2 GB | 8 GB | 16 GB | 32 GB |
Disk space | 20 GB | 50 GB | 150 GB | 300 GB |
Tip
For very large deployments, you will likely need more than one ClusterControl instance to manage the whole fleet. Use ClusterControl Multi-Controller (Ops-C) to manage these instances. See Multi-Controller Installation.
Capacity planning
To put capacity planning into perspective, we have seen a real-world production use case where a single ClusterControl server with 24 CPUs, 32 GB of RAM, and 150 GB of disk space (currently at 92% usage) manages 67 clusters with around 130 nodes (databases + load balancers).
We have also covered optimizing the performance of ClusterControl in this blog post, How to Optimize Performance of ClusterControl and Its Components.
CPU
The CPU capacity of the ClusterControl server plays a crucial role in handling various tasks such as monitoring, managing, and orchestrating database clusters. To accommodate a significant workload and ensure scalability for future growth, we recommend a minimum of 8 to 24 CPU cores with multi-threading capabilities.
RAM
Memory availability directly impacts the performance and responsiveness of ClusterControl, especially when handling a large number of database clusters and nodes. For optimal operation, we recommend 8 to 32 GB of RAM. However, considering the increasing complexity of database environments and potential future expansions, provisioning additional memory is advisable.
During the installation stage, the installer script tunes the necessary parameters for MySQL and Apache according to the amount of RAM in the host. If you have increased the RAM after the installation, it is recommended to revisit the tuning by looking at this blog post, How to Optimize Performance of ClusterControl and Its Components.
Disk space
Disk space requirements primarily depend on the size of the monitoring data, logs, backups, and other associated files generated by ClusterControl. Given the example use-case disk utilization of 92% for 150GB, it is recommended to allocate at least 300GB of disk space for the ClusterControl server. This ensures sufficient headroom for data growth and temporary storage needs.
If you use ClusterControl as the centralized backup repository, consider adding more disk space by attaching another disk or mounting a networked file system such as NFS, SMB, or iSCSI.
If you use Prometheus and agent-based monitoring, you may also use a dedicated Prometheus server (by default, Prometheus is installed on the ClusterControl server) to scale up the monitoring resources. Prometheus defaults to a 15-day retention period; if you require a longer retention period, adjust the disk allocation for the /var/lib/prometheus partition accordingly.
Network bandwidth
ClusterControl relies on network communication for monitoring, provisioning, and managing database clusters. Therefore, it's essential to ensure adequate network bandwidth to support the expected workload. A Gigabit Ethernet connection or higher is recommended for optimal performance. For inter-WAN communication, consider placing the ClusterControl server closest to the most critical network segment (production/primary).
Additional considerations
High availability
For mission-critical environments, implementing high availability (HA) for the ClusterControl server is highly recommended. This involves deploying redundant ClusterControl nodes in an active-passive or active-active configuration to ensure continuous operation and fault tolerance. See Standby ClusterControl Server for High Availability or High Availability ClusterControl (CMON HA). Note that CMON HA is only available in the Enterprise edition.
Storage performance
While sufficient disk space is essential, equally important is the performance of the storage subsystem. Utilizing fast and reliable storage technologies such as solid-state drives (SSDs) or high-performance RAID arrays can significantly improve the responsiveness of ClusterControl, especially during data-intensive operations like backups, restores, scaling and reporting.
Monitoring and maintenance
Regular monitoring of hardware resources, including CPU, RAM, disk space, and network utilization, is essential to identify potential bottlenecks or capacity issues proactively. Additionally, periodic maintenance tasks such as disk cleanup, log rotation, and performance tuning should be scheduled to optimize the ClusterControl server's performance and stability.
Operating system
ClusterControl
ClusterControl has been tested on the following operating systems:
- Red Hat Enterprise Linux 8.x/9.x
- Rocky Linux 8.x/9.x
- AlmaLinux 8.x/9.x
- Ubuntu 22.04/24.04 LTS
- Debian 10.x/11.x/12.x
- SUSE Linux Enterprise Server 15 SP3/15 SP4
Monitored nodes
For the monitored nodes, some database systems are limited to specific operating systems, as shown below:
Database systems | Supported operating systems |
---|---|
- MySQL/MariaDB (standalone & replication) - Galera Cluster (Percona XtraDB Cluster & MariaDB) - PostgreSQL (streaming replication) - TimescaleDB (streaming replication) - MongoDB (replica set & sharded cluster) - MySQL NDB Cluster - ProxySQL - HAProxy - Keepalived - PgBouncer | - Red Hat Enterprise Linux 8.x/9.x - Rocky Linux 8.x/9.x - AlmaLinux 8.x/9.x - Ubuntu 18.04/20.04/22.04 LTS - Debian 10.x/11.x |
- Redis (Sentinel & cluster) - Valkey (Sentinel & cluster) - Elasticsearch | - Red Hat Enterprise Linux 8.x/9.x - Rocky Linux 8.x/9.x - AlmaLinux 8.x/9.x - Ubuntu 20.04/22.04/24.04 LTS - Debian 10.x/11.x |
- Microsoft SQL Server for Linux - Valkey 7 | - Red Hat Enterprise Linux 8.x - Rocky Linux 8.x - AlmaLinux 8.x - Ubuntu 20.04 LTS - Debian 10.x/11.x |
- MariaDB Server v10.x with Mariabackup - MariaDB Galera Cluster v10.x with Mariabackup - PostgreSQL v10+ - MySQL Cluster (NDB) - HAProxy | - SUSE Linux Enterprise Server 15 SP4 |
Attention
- Mixing Red Hat-based and Debian-based nodes in a cluster is not supported.
- Redis is not supported on Debian 10.
Software dependencies
The following software and packages are required by ClusterControl:
- MySQL client/server (5.7 or later) or MariaDB client/server (10.0 or later)
- Apache web server (2.4 or later)
- OpenSSH server/client
- NTP server – All servers’ time must be synced under one time zone
- `socat` or `netcat` – for streaming backups
Note
If ClusterControl is installed via the installation script (install-cc) or a package manager (yum/dnf/apt/zypper), all dependencies will be satisfied automatically.
Firewall and security groups
It is important to secure the ClusterControl host and the database cluster. We recommend isolating the database infrastructure from the public Internet and whitelisting only the known hosts or networks that need to reach the database cluster.
ClusterControl requires ports used by the following services to be accessible or enabled:
- ICMP (echo reply/request).
- SSH (default is 22).
- HTTPS (default is 443).
- MySQL (default is 3306).
- CMON RPC (default is 9500).
- CMON RPC TLS (default is 9501).
- CMON Events (default is 9510).
- CMON SSH (default is 9511).
- CMON Cloud (default is 9518).
- Streaming port for backups through socat/netcat (default is 9999, configurable under Backup Settings).
- CMON Proxy (default is 19501) - Only for ClusterControl Ops-C (multi-controller).
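For example, on a Red Hat-based ClusterControl host running firewalld, the ports listed above could be opened as follows (a sketch only; adjust the services, ports, and zone to your environment):

```bash
# Allow SSH, HTTPS and MySQL on the ClusterControl host
sudo firewall-cmd --permanent --add-service=ssh --add-service=https --add-service=mysql
# Open the CMON RPC, events, SSH, cloud and backup streaming ports
sudo firewall-cmd --permanent --add-port=9500-9501/tcp --add-port=9510-9511/tcp --add-port=9518/tcp --add-port=9999/tcp
sudo firewall-cmd --reload
```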
ClusterControl supports various database and application vendors, and each has its own set of standard ports that need to be reachable. The following ports and services need to be reachable from the ClusterControl node to the managed nodes (unless the target services are deliberately configured on non-default ports):
Node type | Default port (Service) |
---|---|
All managed nodes | - 22 (SSH) - ICMP (echo reply/request) - 9100 (Node exporter) - 9011 (Process exporter) |
MySQL/MariaDB (standalone and replication) | - 3306 (MySQL) - 9104 (MySQL exporter) |
Galera Cluster (MariaDB Galera Cluster/Percona XtraDB Cluster) | - 3306 (MySQL) - 4444 (SST) - 4567 TCP/UDP (Galera) - 4568 (Galera IST) - 9200 (HAProxy health check) - 9104 (MySQL exporter) |
MongoDB replica set | - 27017 (mongod) - 9216 (MongoDB exporter) |
MongoDB sharded cluster | - 27018 (mongod) - 27017 (mongos) - 27019 (config server) - 9216 (MongoDB exporter) |
PostgreSQL/TimescaleDB | - 5432 (PostgreSQL) - 9200 (HAProxy health check) - 9187 (PostgreSQL exporter) |
HAProxy | - 9600 (HAProxy stats) - 3307 (MySQL load-balanced) - 3308 (MySQL load-balanced read-only) - 5433 (PostgreSQL load-balanced) - 5434 (PostgreSQL load-balanced read-only) |
MariaDB MaxScale | - 6603 (MaxCtrl - CLI) - 4006 (Round robin listener) - 4008 (Read/Write split listener) - 4442 (Debug information) |
Keepalived | - IP protocol 112 (VRRP) |
Galera Arbitrator (garbd) | - 4567 (Galera) |
ProxySQL | - 6032 (ProxySQL Admin) - 6033 (MySQL load-balanced) |
Prometheus | - 9090 (Prometheus) |
Microsoft SQL Server | - 1433 (SQL Server) - 9999 (SQL Server exporter) |
Redis/Valkey (Sentinel) | - 6379 (Redis) - 26379 (Sentinel) - 9121 (Redis exporter) |
Redis/Valkey (Cluster) | - 6379 (Redis) - 16379 (Cluster bus) - 9121 (Redis exporter) |
Elasticsearch | - 9200 (Elastic HTTP/Transfer) - 9114 (Elasticsearch exporter) |
Operating system user
The ClusterControl controller (cmon) process requires a dedicated operating system user to perform various management and monitoring commands on the managed nodes. The user defined by `os_user` or `sshuser` in the CMON configuration file must exist on all managed nodes and must be able to perform super-user commands.
We recommend installing ClusterControl as 'root', and running as root is the easiest option. If you perform the installation as a user other than 'root', the following must be true:
- The OS user must exist on all nodes
- The OS user must not be `mysql`
- The `sudo` program must be installed on all hosts
- The OS user must be allowed to do sudo, i.e., it must be in sudoers
- The OS user must be configured with the proper `PATH` environment variable. The following `PATH` is expected for user `myuser` (the value shown below is illustrative; adjust it to your distribution):
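```bash
# Illustrative PATH only; the exact directories may differ per distribution
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
```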
Attention
ClusterControl requires full access of sudo (all commands) for full functionality. Restricting the commands would cause some of the operations to fail (cluster recovery, failover, backup restoration, service control and cluster deployment).
For sudoers, using passwordless sudo is recommended (privilege escalation without a password). To set up a passwordless sudo user, open /etc/sudoers with a text editor and add the following line at the end, replacing <os_user> with the sudo username of your choice:
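```bash
<os_user> ALL=(ALL) NOPASSWD: ALL   # grants passwordless sudo for all commands
```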
Open a new terminal to verify if it works. You should now be able to run the following command without entering a password:
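```bash
sudo whoami   # any command requiring root will do; it should not prompt for a password
```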
Note
If using passwordless sudo is not an option, do not forget to specify the sudo password when deploying a new or importing an existing database cluster.
You can also verify this with the SSH command line used by CMON (assuming SSH key-based authentication has been set up correctly):
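```bash
# Approximation of the SSH invocation used by CMON; the exact options vary between versions
ssh -tt -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null <os_user>@<ip_address_or_hostname> "sudo whoami"
```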
Where <os_user> is the name of the user that ClusterControl will use for management purposes, and <ip_address_or_hostname> is the IP address or hostname of a node in your cluster.
SSH key-based authentication
Proper SSH key-based authentication setup from the ClusterControl node to all managed nodes is mandatory. Before any operation can be performed on a managed node, the node must be accessible via SSH using key-based authentication, without a password. ClusterControl uses `libssh` (a multiplatform C library implementing the SSHv2 protocol), which supports the following public-key algorithms:
- ssh-rsa
- rsa-sha2-512
- rsa-sha2-256
- ssh-dss
- ssh-ed25519
- ecdsa-sha2-nistp256
- ecdsa-sha2-nistp384
- ecdsa-sha2-nistp521
Note
Note that ClusterControl is fully tested with RSA public keys. Other supported key types should work in most cases.
Setting up key-based authentication
To set up SSH key-based authentication, generate an SSH key pair (private and public keys) as the designated user on the ClusterControl host and copy the public key to the target hosts.
Tip
It is not necessary to set up two-way SSH key-based authentication, e.g., from the managed database node back to the ClusterControl node.
To generate an SSH key, use the `ssh-keygen` command, which is available in the OpenSSH client package. On the ClusterControl node:
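```bash
ssh-keygen -t rsa # press Enter on all prompts
```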
The above command will generate SSH RSA private and public keys under the user's home directory, `/root/.ssh/`. The private key, `id_rsa`, has to be kept secure on the node. The public key, `id_rsa.pub`, should be copied over to all nodes that ClusterControl will access passwordlessly.
The next step is to copy the SSH public key to all nodes. You may use the `ssh-copy-id` command to achieve this if the destination node supports password authentication:
Example
```bash
$ whoami
root
$ ls -1 ~/.ssh/id*
/root/.ssh/id_rsa
/root/.ssh/id_rsa.pub
$ ssh-copy-id root@192.168.0.10 # specify the root password of 192.168.0.10 if prompted
```
The `ssh-copy-id` command simply copies the public key from the source server and adds it to the destination server's authorized key list, which defaults to `~/.ssh/authorized_keys` of the authenticated SSH user. If password authentication is disabled and the target node requires key-based authentication, you can use the `-o` flag to customize the SSH options:
```bash
ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' root@192.168.0.10
```
Alternatively, you can manually copy the public key to the target nodes. On the ClusterControl node, copy the content of the SSH public key located at `~/.ssh/id_rsa.pub` and paste it into `~/.ssh/authorized_keys` on all target nodes.
Example
The following example shows how a root user on the ClusterControl host (192.168.0.10) generates and copies an SSH key to database hosts (192.168.0.11, 192.168.0.12, 192.168.0.13):
```bash
$ whoami
root
$ ssh-keygen -t rsa # press Enter on all prompts
$ ls -1 ~/.ssh/id*
/root/.ssh/id_rsa
/root/.ssh/id_rsa.pub
$ ssh-copy-id 192.168.0.11 # specify the root password of 192.168.0.11 if prompted
$ ssh-copy-id 192.168.0.12 # specify the root password of 192.168.0.12 if prompted
$ ssh-copy-id 192.168.0.13 # specify the root password of 192.168.0.13 if prompted
```
If you are running as a sudo user, e.g., "sysadmin", here is an example:
```bash
$ whoami
sysadmin
$ ssh-keygen -t rsa # press Enter on all prompts
$ ls -1 ~/.ssh/id*
/home/sysadmin/.ssh/id_rsa
/home/sysadmin/.ssh/id_rsa.pub
$ ssh-copy-id 192.168.0.11 # specify the sysadmin password of 192.168.0.11 if prompted
$ ssh-copy-id 192.168.0.12 # specify the sysadmin password of 192.168.0.12 if prompted
$ ssh-copy-id 192.168.0.13 # specify the sysadmin password of 192.168.0.13 if prompted
```
You should be able to SSH from the ClusterControl host to the other server(s) as the chosen OS user without a password:
```bash
ssh <os_user>@192.168.0.11
ssh <os_user>@192.168.0.12
ssh <os_user>@192.168.0.13
```
For cloud users, you can use the key pair generated by the cloud provider by uploading it onto the ClusterControl host and specifying its physical path when configuring the SSH-related parameters in the ClusterControl UI (deploy a cluster, import nodes, etc.). ClusterControl will then use this key to perform tasks that require SSH key-based authentication and store the path in the `ssh_identity` variable inside the CMON configuration file:
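```bash
# Illustrative path; typically set in the CMON configuration file, e.g. /etc/cmon.d/cmon_<cluster_id>.cnf
ssh_identity=/root/mycloudkey.pem
```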
If you use other public-key algorithms (ClusterControl defaults to RSA), make sure the public key generated on the ClusterControl node is copied and allowed on all managed nodes under `~/.ssh/authorized_keys`. You can use the `ssh-copy-id` command (as shown in the examples above), or simply copy the public key to all managed nodes manually.
Sudo password
Sudo with or without a password is possible. If undefined, ClusterControl escalates privileges with passwordless sudo. When deploying a new cluster or importing an existing one into ClusterControl, the user will be asked to specify the sudo password in the deployment dialog. The specified sudo password is then stored inside the CMON configuration file under the `sudo` variable:
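```bash
# Illustrative value only; the actual password and quoting will differ in your installation
sudo="echo 'MySudoPassword' | sudo -S 2>/dev/null bash -"
```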
Attention
Having `2>/dev/null` in the sudo command is compulsory to strip out stderr from the response.
Timezone
ClusterControl requires all servers’ time to be synchronized and to run within the same time zone. Verify this by using the following command:
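```bash
# One simple check (illustrative): run on every host and compare the reported time and time zone
date
timedatectl   # on systemd-based systems, also shows the NTP synchronization status
```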
To change the time zone, e.g., from UTC to Pacific time:
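```bash
# On systemd-based distributions (illustrative; pick the appropriate time zone name)
sudo timedatectl set-timezone America/Los_Angeles
```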
However, UTC is recommended. Configure an NTP client on each host with a working time server to avoid time drift between hosts, which could cause inaccurate reporting or incorrect graph plotting. To immediately sync a server's time with a time server, use the following command:
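```bash
# Using ntpdate (illustrative; if chrony is in use, 'chronyc makestep' achieves the same)
sudo ntpdate -u pool.ntp.org
```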