Create a Database Cluster
Choose from the list of database cluster technologies, vendors, and versions to deploy a new database cluster. The following database cluster types are supported:
- MySQL/MariaDB (standalone or replication)
  - Oracle MySQL
  - Percona Server for MySQL
  - MariaDB Server
- Galera Cluster
  - Percona XtraDB Cluster
  - MariaDB Cluster
- PostgreSQL (standalone or streaming replication)
- TimescaleDB (standalone or streaming replication)
- MongoDB (Replica Set or Sharded Cluster)
  - MongoDB
  - Percona Server for MongoDB
- Redis
- Microsoft SQL Server for Linux
- Elasticsearch
- Valkey Cluster
The following prerequisites must be fulfilled prior to the deployment:
- Passwordless SSH (SSH using key-based authentication) is configured from the ClusterControl node to all database nodes. See Passwordless SSH.
- Verify that sudo is working properly if you are using a non-root user. See Operating System User.
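If you want to verify both prerequisites before starting a deployment, a quick check like the sketch below can be run from the ClusterControl node. The key path, user, and hostname are placeholders for your own environment.

```bash
# Confirm key-based SSH and sudo from the ClusterControl node to a database node.
# Replace the key path, user, and host with your own values.
ssh -i /root/.ssh/id_rsa -p 22 ubuntu@db1.example.com "sudo whoami"
# Expected output: root
```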
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl v2 → Activity Center → Jobs.
MySQL Replication
Deploys a new MySQL/MariaDB Replication or a standalone MySQL/MariaDB server. The database cluster will be automatically added into ClusterControl once deployed. A minimum of two nodes is required for MySQL/MariaDB replication. If only one database IP address or hostname is provided, ClusterControl will deploy it as a standalone MySQL/MariaDB server with binary log enabled.
The following vendors and versions are supported for a new deployment:
- Oracle MySQL – 8.0.
- Percona Server for MySQL – 8.0.
- MariaDB Server – 10.4, 10.5, 10.6, 10.11 and 11.4 LTS.
By default, ClusterControl deploys MySQL/MariaDB replication with the following configurations:
- MySQL GTID with `log_slave_updates` enabled (MySQL and Percona only).
- MariaDB GTID with `log_slave_updates` enabled (MariaDB only).
- All database nodes will be configured with `read_only=ON` and `super_read_only=ON` (if supported). The chosen primary will be promoted by disabling read-only at runtime.
- ClusterControl will create and grant the necessary privileges for two MySQL/MariaDB users – `cmon` for monitoring and management, and `backupuser` for backup and restore purposes.
- The generated account credentials are stored inside `secrets-backup.cnf` under the MySQL configuration directory.
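As a rough post-deployment sanity check (not part of the deployment wizard itself), you can confirm these defaults on a MySQL or Percona node with the standard client; the hostname and credentials below are placeholders.

```bash
# Verify GTID mode, log_slave_updates, and the read-only flags on a replica.
mysql -h db2.example.com -uroot -p \
  -e "SELECT @@gtid_mode, @@log_slave_updates, @@read_only, @@super_read_only;"
```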
If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Base Template Files for details.
It is possible to set up a primary-primary replication from scratch under the Add Nodes section. You can also add more replicas later after the deployment is completed.
Attention
ClusterControl sets `read_only=ON` on all slaves, but a privileged user (SUPER) can still write to a slave (except for MySQL versions that support `super_read_only`).
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, ensure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Server port |
- MySQL server port. The default is 3306.
|
Server data directory |
- Location of the MySQL/MariaDB data directory. The default is `/var/lib/mysql`.
|
Admin/Root user |
- Specify the MySQL root user. This user will be granted global SUPER permission and the GRANT option.
|
Admin/Root password |
- Specify the MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Configuration template |
- MySQL configuration template file under `/etc/cmon/templates` or `/usr/share/cmon/templates`. See Base Template Files for details.
|
Semi-synchronous replication |
- On – ClusterControl will configure a semi-synchronous replication.
- Off – ClusterControl will configure an asynchronous replication.
|
Add Nodes
|
Primary node |
- IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- A primary node is mandatory. To set up a single instance, you can skip defining replica nodes and proceed with the deployment.
|
Replica nodes |
- IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Use multi-primary replication |
- On – Adds a second replication cluster for primary-primary replication between the first cluster and the secondary cluster. Circular replication will be set up by ClusterControl. The secondary cluster/node will be set to read-only.
- Off – Skips configuring a secondary cluster/node.
|
MySQL Galera
Deploys a new MySQL/MariaDB Galera Cluster. The database cluster will be automatically added into ClusterControl once deployed. A minimal setup comprises one Galera node (no high availability, but it can later be scaled out with more nodes); however, a minimum of three nodes is recommended for high availability. Garbd (an arbitrator) can be added later, after the deployment completes, if needed.
The following vendors and versions are supported for a new deployment:
- Percona XtraDB Cluster – 8.0.
- MariaDB Cluster – 10.4, 10.5, 10.6, 10.11 and 11.4 LTS.
By default, ClusterControl deploys MySQL Galera with the following configurations:
- Use xtrabackup-v2 or mariabackup (depending on the chosen vendor) for `wsrep_sst_method`.
- Binary logging is enabled.
- ClusterControl will create and grant the necessary privileges for three MySQL users – `cmon` for management, `cmonexporter` for monitoring, and `backupuser` for backup and restore purposes.
- Generated account credentials are stored inside `/etc/mysql/secrets-backup.cnf`.
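If you want to confirm the Galera defaults after the job finishes, a quick check with the MySQL client (placeholder host and credentials) might look like this:

```bash
# Check the SST method and the current cluster size on any Galera node.
mysql -h db1.example.com -uroot -p \
  -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_sst_method'; SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
```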
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Server port |
- MySQL server port. The default is 3306.
|
Server data directory |
- Location of the MySQL/MariaDB data directory. The default is `/var/lib/mysql`.
|
Admin/Root user |
- Specify the MySQL root user. This user will be granted global SUPER permission and the GRANT option.
|
Admin/Root password |
- Specify the MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Configuration template |
- MySQL configuration template file under `/etc/cmon/templates` or `/usr/share/cmon/templates`. See Base Template Files for details.
|
Enable SSL encryption |
- On – Configures Galera Replication SSL encryption and SST encryption. This option is ignored by certain vendors. Percona XtraDB Cluster 8.0 enables SSL by default regardless of this setting.
- Off – Skips configuring SSL encryption.
|
Add Nodes
|
Galera nodes |
- IP address or hostname of the database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- A minimum of three nodes is recommended.
|
PostgreSQL
Deploys a new PostgreSQL standalone or streaming replication cluster from ClusterControl. A minimum of two nodes is required for PostgreSQL streaming replication. The following vendors and versions are supported for a new deployment:
- PostgreSQL – 12, 13, 14, 15, and 16.
By default, ClusterControl deploys PostgreSQL instances with the following configurations:
- Configure and load the `pg_stat_statements` module.
- The WAL level is set to `hot_standby`.
- ClusterControl will configure the PostgreSQL instance with SSL encryption for client-server connections.
- ClusterControl will create and grant the necessary privileges for two additional PostgreSQL users – `cmon_replication` for PostgreSQL streaming replication, and `cmonexporter` for monitoring.
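A post-deployment check of these defaults could look like the sketch below (placeholder host and user). Note that recent PostgreSQL versions report `wal_level` as `replica`, the modern name for `hot_standby`.

```bash
# Confirm the WAL level and that pg_stat_statements is loaded.
psql -h db1.example.com -U postgres -c "SHOW wal_level;" \
     -c "SELECT count(*) FROM pg_stat_statements;"
```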
ClusterControl supports pgvector, which can be enabled in the PostgreSQL deployment wizard through an additional Extensions step. While more PostgreSQL extensions are planned for future releases, pgvector is currently the only extension available for selection.
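Once a deployment with the pgvector extension finishes, enabling and trying it out might look like this (hypothetical connection details, database, and table):

```bash
# Enable pgvector in a database and create a table with a vector column.
psql -h primary.example.com -U postgres -d app_db \
  -c "CREATE EXTENSION IF NOT EXISTS vector;" \
  -c "CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));"
```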
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Server port |
- PostgreSQL server port. The default is 5432.
|
Server data directory |
- Location of the PostgreSQL data directory. The default is `/var/lib/pgsql/{version}/data`.
|
User |
- Specify the PostgreSQL admin user.
|
Password |
- Specify the PostgreSQL admin password.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Configuration template |
- PostgreSQL configuration template file under `/etc/cmon/templates` or `/usr/share/cmon/templates`. See Base Template Files for details.
|
Add Nodes
|
Primary node |
- IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via passwordless SSH.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- A primary node is mandatory. To set up a single instance, you can skip defining replica nodes and proceed with the deployment.
|
Replica nodes |
- IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via passwordless SSH.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Synchronous Replication |
- On – Configures synchronous streaming replication between the master and the chosen slave. Synchronous replication can be enabled per individual slave node with a considerable performance overhead.
- Off – Skips this configuration part and configures asynchronous streaming replication.
|
Extensions |
Pgvector |
- On – Installs the pgvector extension. After the deployment job finishes, the user must enable the pgvector extension manually by connecting to the PostgreSQL database and executing the following command: `CREATE EXTENSION vector;`
- Off – Skips the pgvector extension installation.
|
TimescaleDB
Deploys a new TimescaleDB standalone or streaming replication cluster from ClusterControl. A minimum of two nodes is required for TimescaleDB streaming replication. The following vendors and versions are supported for a new deployment:
- TimescaleDB – 12, 13, 14, and 15.
By default, ClusterControl deploys TimescaleDB with the following configurations:
- Configure and load the `pg_stat_statements` and `timescaledb` modules.
- The WAL level is set to `hot_standby`.
- ClusterControl will configure the TimescaleDB instance with SSL encryption for client-server connections.
- ClusterControl will create and grant the necessary privileges for two additional TimescaleDB users – `cmon_replication` for streaming replication, and `cmonexporter` for monitoring.
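To confirm that both modules are active after deployment, something like the following can be run against any node (placeholder host and user):

```bash
# List the installed timescaledb and pg_stat_statements extensions.
psql -h db1.example.com -U postgres \
  -c "SELECT extname, extversion FROM pg_extension WHERE extname IN ('timescaledb', 'pg_stat_statements');"
```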
Note
You may also deploy a PostgreSQL and convert it to TimescaleDB at a later stage. However, this action will be irreversible and ClusterControl will treat the cluster as TimescaleDB onwards.
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Server port |
- TimescaleDB server port. The default is 5432.
|
Server data directory |
- Location of the TimescaleDB data directory. The default is `/var/lib/pgsql/{version}/data`.
|
User |
- Specify the TimescaleDB admin user.
|
Password |
- Specify the TimescaleDB admin password.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Configuration template |
- TimescaleDB configuration template file under `/etc/cmon/templates` or `/usr/share/cmon/templates`. See Base Template Files for details.
|
Add Nodes
|
Primary node |
- IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- A primary node is mandatory. To set up a single instance, you can skip defining replica nodes and proceed with the deployment.
|
Replica nodes |
- IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Synchronous Replication |
- On – Configures synchronous streaming replication between the master and the chosen slave. Synchronous replication can be enabled per individual slave node with a considerable performance overhead.
- Off – Skips this configuration part and configures asynchronous streaming replication.
|
MongoDB ReplicaSet
Deploys a new MongoDB Replica Set. The database cluster will be automatically added to ClusterControl once deployed. A minimum of three nodes (including mongo arbiter) is recommended. The following vendors and versions are supported for a new deployment:
- MongoDB – 4.4, 5.0, 6.0 and 7.0
- Percona – 4.4, 5.0, 6.0 and 7.0
By default, ClusterControl deploys MongoDB Replica Set members with the following configurations:
- Configure `setParameter.enableLocalhostAuthBypass: true` inside the MongoDB configuration file.
- ClusterControl will create and grant the necessary privileges for an additional MongoDB user – `admin.cmon_backup` – for backup and restore purposes.
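After the deployment, the replica set state can be inspected with `mongosh`; the connection string below is a placeholder for your own admin credentials and host.

```bash
# Print each replica set member and its state (PRIMARY, SECONDARY, ARBITER, ...).
mongosh "mongodb://admin:secret@db1.example.com:27017/admin" \
  --eval "rs.status().members.forEach(m => print(m.name, m.stateStr))"
```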
Attention
It is possible to deploy only two MongoDB nodes (without an arbiter). The caveat of this approach is that there is no automatic failover: if the primary node goes down, manual failover is required to make the other server run as primary. Automatic failover works fine with three nodes or more.
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Server port |
- MongoDB server port. The default is 27017.
|
Server data directory |
- Location of the MongoDB data directory. The default is `/var/lib/mongodb`.
|
User |
- Specify the MongoDB admin user.
|
Password |
- Specify the MongoDB admin password.
|
ReplicaSet name |
- Specify the name of the replica set, similar to the `replication.replSetName` option in MongoDB.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Configuration template |
- MongoDB configuration template file under `/etc/cmon/templates` or `/usr/share/cmon/templates`. See Base Template Files for details.
|
Add Nodes
|
Primary node |
- IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- A primary node is mandatory. To set up a single instance, you can skip defining replica nodes and proceed with the deployment.
|
Replica nodes |
- IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- There are two additional options for replica nodes:
- Replication delay – Specify the amount of time the secondary member should be delayed in seconds.
- Priority – Set the secondary member priority. A bigger integer represents a higher priority. A delayed secondary member should be set to 0 because it is not fit to become primary and should be hidden from the application.
|
MongoDB Shards
Deploys a new MongoDB Sharded Cluster. The database cluster will be automatically added to ClusterControl once deployed. For production deployment, it is recommended to have at least 8 nodes for high availability setup:
- 2 nodes for the mongos (router),
- 3 nodes for the config server (replica set),
- 3 nodes per shard (replica set).
The following vendors and versions are supported for a new deployment:
- MongoDB Enterprise – 4.4, 5.0, 6.0 and 7.0
- MongoDB – 4.4, 5.0, 6.0 and 7.0
- Percona – 4.4, 5.0, 6.0 and 7.0
By default, ClusterControl deploys MongoDB Shards with the following configurations:
- Configure `setParameter.enableLocalhostAuthBypass: true` inside the MongoDB configuration file.
- ClusterControl will create and grant the necessary privileges for an additional MongoDB user – `admin.cmon_backup` – for backup and restore purposes.
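After deployment, the overall sharding topology can be inspected through any mongos router; the connection string below is a placeholder.

```bash
# Show shards, config servers, and chunk distribution via a mongos router.
mongosh "mongodb://admin:secret@mongos1.example.com:27017/admin" --eval "sh.status()"
```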
Attention
MongoDB Sharded Cluster does not support the `mongodump` backup method. Users will be asked to install Percona Backup for MongoDB when creating or scheduling a backup for this cluster type after the deployment completes.
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Server port |
- MongoDB server port for shards. The default is 27019.
|
Server data directory |
- Location of the MongoDB data directory. The default is `/var/lib/mongodb`.
|
User |
- Specify the MongoDB admin user.
|
Password |
- Specify the MongoDB admin password.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Configuration template |
- MongoDB shards (replica set) configuration template file under `/etc/cmon/templates` or `/usr/share/cmon/templates`. See Base Template Files for details.
|
Router configuration template |
- MongoDB router (mongos) configuration template file under `/etc/cmon/templates` or `/usr/share/cmon/templates`. See Base Template Files for details.
|
Configuration Servers and Routers |
Add configuration server |
- IP address or hostname of the config servers. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via passwordless SSH.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- A minimum of 3 nodes is recommended.
|
Port |
- MongoDB server port for config server (config). The default is 27019.
|
Node router server |
- IP address or hostname of the router servers. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via passwordless SSH.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Port |
- MongoDB server port for router service (mongos). The default is 27017.
|
Shard
|
ReplicaSet name |
- Specify the name of the replica set, similar to the `replication.replSetName` option in MongoDB.
|
Port |
- MongoDB server port for shards. The default is 27018.
|
Add nodes to the shard |
- IP address or hostname of the primary node for the shard. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Replica nodes |
- IP address or hostname of the replica nodes for the shard. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- There are two additional options for replica nodes:
- Replication delay – Specify the amount of time the secondary member should be delayed in seconds.
- Priority – Set the secondary member priority. A bigger integer represents a higher priority. A delayed secondary member should be set to 0 because it is not fit to become primary and should be hidden from the application.
|
Add another shard |
- Create another shard. You can then specify the IP address or hostname of the MongoDB server that falls under this new shard.
|
Redis Sentinel
Deploys new Redis instances with Redis Sentinel. A minimum of 3 nodes is recommended for high availability and automatic failover. The following vendors and versions are supported for a new deployment:
- Redis Sentinel – v6 and v7
By default, ClusterControl deploys Redis instances with the following configurations:
- ClusterControl will configure the Redis instance with `appendonly` enabled.
- ClusterControl will secure the instance with AUTH enabled and configure the `requirepass` and `masterauth` options.
- The configuration `maxmemory` (70% of the node's RAM, rounded to the nearest power of 2) and `maxmemory-policy=allkeys-lru` will be set.
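A quick way to confirm the AUTH and memory settings on a deployed node is with `redis-cli` (placeholder host and password):

```bash
# Check the eviction policy, memory limit, and replication role.
redis-cli -h db1.example.com -p 6379 -a 'secret' CONFIG GET maxmemory-policy
redis-cli -h db1.example.com -p 6379 -a 'secret' CONFIG GET maxmemory
redis-cli -h db1.example.com -p 6379 -a 'secret' INFO replication
```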
Tips
Redis Sentinel requires three Sentinel instances for automatic primary promotion. If you want to deploy a two-node Redis replication cluster, a Sentinel can be co-located on the ClusterControl server (in addition to the Sentinels co-located on each Redis instance).
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Server port |
- Redis server port. The default is 6379.
|
Server data directory |
- Location of the Redis data directory. The default is `/var/lib/redis`.
|
Password |
- Specify the Redis admin password.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Add Nodes
|
Primary node |
- IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Replica nodes |
- IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Redis Cluster
Deploys a new Redis Cluster. A minimum of 3 nodes is recommended for high availability and automatic failover. For production deployment, it is recommended to have at least 6 nodes for a high availability setup:
- 2 nodes (1 primary + 1 replica) for shard 1
- 2 nodes (1 primary + 1 replica) for shard 2
- 2 nodes (1 primary + 1 replica) for shard 3
The following vendors and versions are supported for a new deployment:
- Redis Cluster – v6 and v7
By default, ClusterControl deploys Redis cluster instances with the following configurations:
- ClusterControl will configure the Redis instance with `appendonly` enabled.
- ClusterControl will secure the instance with AUTH enabled and configure the `requirepass` and `masterauth` options.
- ClusterControl will enable TLS encryption. To access the cluster, one must connect with the `--tls` flag.
- The configuration `maxmemory` (70% of the node's RAM, rounded to the nearest power of 2) and `maxmemory-policy=allkeys-lru` will be set.
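Because TLS is enabled, clients must connect with the `--tls` flag; a post-deployment check of the cluster state might look like this (placeholder host, password, and CA path):

```bash
# Confirm the cluster is healthy and all 16384 hash slots are assigned.
redis-cli --tls --cacert /path/to/ca.crt -h db1.example.com -p 6379 -a 'secret' CLUSTER INFO
```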
Tips
Redis Cluster requires at least 3 nodes for a standard cluster setup with 3 shards. However, each shard should have its replica, hence 6 nodes are recommended (1 primary + 1 replica per shard).
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Redis port |
- Redis server port. The default is 6379.
|
Cluster bus port |
- Redis cluster bus port. The default is 16379.
|
Node timeout (ms) |
- The maximum amount of time a Redis Cluster node can be unavailable without being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter also controls other important behaviors in Redis Cluster; notably, every node that cannot reach the majority of master nodes for the specified amount of time will stop accepting queries.
|
Replica validity factor |
- Specify the replica validity factor, used to decide whether a replica that has been disconnected from the primary for more than Node timeout multiplied by this value is still valid for failover.
- Set the factor to 0 to always consider a replica valid for failover. If the value is positive, a maximum disconnection time is calculated as the node timeout multiplied by this factor; a replica whose link to the master was down for longer than that will not attempt a failover.
|
Configuration template |
- Redis configuration template file under `/etc/cmon/templates` or `/usr/share/cmon/templates`. See Base Template Files.
|
Username |
- Specify an admin user for Redis that will be created by ClusterControl.
|
Password |
- Specify the Redis admin password.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Shard #
|
Primary node |
- IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Replica nodes |
- IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If no replica, leave it blank and proceed with adding a shard.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Add another shard |
- A minimum of 3 shards is required. A similar Shard section will be presented where you can add the primary and replica nodes.
|
SQL Server
Deploys a new Microsoft SQL Server AlwaysOn high availability setup. A minimum of 3 nodes is required. The following vendors and versions are supported for a new deployment:
- Microsoft SQL Server for Linux – 2019 and 2022
By default, ClusterControl deploys SQL Server with the following configurations:
- Enforces the SQL Server user’s password policy as shown here.
- At the moment, ClusterControl only deploys AlwaysOn with asynchronous-commit mode, where it does not wait for any secondary replica to write incoming transaction log records to disk.
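After deployment, the availability group and the synchronization health of its replicas can be checked with `sqlcmd` (placeholder host and credentials):

```bash
# List availability groups and the health of each replica.
sqlcmd -S db1.example.com,1433 -U admin -P 'Secret#Passw0rd' -Q \
  "SELECT ag.name, ars.synchronization_health_desc
   FROM sys.availability_groups ag
   JOIN sys.dm_hadr_availability_replica_states ars ON ag.group_id = ars.group_id"
```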
Attention
Only hostname or FQDN is supported. Therefore, proper host naming and mapping must be performed beforehand. You may use `/etc/hosts` or DNS mapping to achieve this. When adding a database node in ClusterControl, entering an IP address will produce an error.
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Server port |
- SQL Server port. The default is 1433.
|
Admin username |
- Specify the SQL Server admin username.
|
Admin password |
- Specify the SQL Server admin password. It is highly recommended that you use the generated password as it meets the minimum requirements.
- Click the eye icon to view the unmasked password. Copy this value somewhere safe, or generate a new one.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Add Nodes
|
Primary node |
- Hostname or FQDN of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Replica nodes |
- Hostname or FQDN of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- Up to 8 replica nodes are supported.
|
Elasticsearch
Deploys a new Elasticsearch standalone or clustered setup. For a clustered Elasticsearch setup, three nodes are required: three master-eligible nodes, two of which also act as data nodes (co-located with the masters). The following vendor and versions are supported for a new deployment:
- Elastic – 7.17, 8.0, 8.1 and 8.3
By default, ClusterControl deploys Elasticsearch with the following configurations:
- For clustered setup, ClusterControl will configure an NFS server on one of the Elasticsearch nodes and mount the shared filesystem on all data nodes. This is for snapshot backup and restoration.
- For standalone setup, ClusterControl will create a local path for snapshot backup and restoration.
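Once deployed, the cluster health and the registered snapshot repository can be checked over the HTTP API (placeholder host and credentials; `-k` skips certificate verification for self-signed certificates):

```bash
# Cluster health and registered snapshot repositories.
curl -k -u admin:secret "https://es1.example.com:9200/_cluster/health?pretty"
curl -k -u admin:secret "https://es1.example.com:9200/_snapshot?pretty"
```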
Attention
The minimum memory requirement for an Elasticsearch master node, data node, or master-data node is 1576 MB. ClusterControl will abort the deployment job if this requirement is not met. See Hardware Requirement for Elasticsearch.
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
HTTP port |
- Elasticsearch HTTP port. The default is 9200.
|
Transfer port |
- Elasticsearch transport port. The default is 9300.
|
Admin username |
- Specify the Elasticsearch admin username.
|
Admin password |
- Specify the Elasticsearch admin password.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Add Nodes
|
Eligible master |
- IP address, hostname, or FQDN of the eligible master node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via passwordless SSH.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- There is an additional option for the master node:
- Use as data node – Toggle ON to configure the master as a data node. The role will be master-data. If OFF, the node will be configured as master only. You may add more data nodes under the Data nodes section.
|
Data nodes |
- IP address, hostname, or FQDN of the data node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Snapshot Storage Configuration |
Repository name |
- Specify the snapshot storage repository name.
|
Storage host |
- The host whose physical filesystem stores the snapshots and shares them with the other nodes in the cluster.
|
Default storage location |
- Location of the shared filesystem used to store and retrieve snapshots. This location will be registered in the `path.repo` setting on all master and data nodes in the cluster.
|
Configure shared filesystem |
- On – ClusterControl will configure the NFS shared filesystem on the chosen Storage host and Default storage location as the default cluster’s snapshot repository.
- Off – This configuration part will be skipped. Users are responsible for configuring the NFS shared filesystem on the chosen Storage host and Default storage location.
|
Valkey Cluster
Deploys a new Valkey Cluster. A minimum of 3 nodes is recommended for high availability and automatic failover. For production deployment, it is recommended to have at least 6 nodes for a high availability setup:
- 2 nodes (1 primary + 1 replica) for shard 1
- 2 nodes (1 primary + 1 replica) for shard 2
- 2 nodes (1 primary + 1 replica) for shard 3
The following vendors and versions are supported for a new deployment:
By default, ClusterControl deploys Valkey cluster instances with the following configurations:
- ClusterControl will configure the Valkey instance with `appendonly` enabled.
- ClusterControl will secure the instance with AUTH enabled and configure the `requirepass` and `masterauth` options.
- ClusterControl will enable TLS encryption. To access the cluster, one must connect with the `--tls` flag.
- The configuration `maxmemory` (70% of the node's RAM, rounded to the nearest power of 2) and `maxmemory-policy=allkeys-lru` will be set.
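As with Redis Cluster, clients must connect with `--tls`; a post-deployment check using `valkey-cli` (placeholder host, password, and CA path) could look like this:

```bash
# Confirm the Valkey cluster state and slot assignment.
valkey-cli --tls --cacert /path/to/ca.crt -h db1.example.com -p 6379 -a 'secret' CLUSTER INFO
```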
Tips
Valkey Cluster requires at least 3 nodes for a standard cluster setup with 3 shards. However, each shard should have its replica, hence 6 nodes are recommended (1 primary + 1 replica per shard).
Field |
Description |
Cluster Details |
Cluster name |
- Specify a name for the cluster.
- If blank, ClusterControl will generate a name for the cluster.
|
Tags |
- Specify tags for the cluster. Press Enter for every new tag defined.
- Tags are useful to group your database clusters and simplify lookup and filtering.
|
SSH Configuration |
SSH user |
- Specify “root” if you have root credentials.
- If you use `sudo` to execute privileged system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
|
SSH user key path |
- Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH user to perform passwordless SSH. See Passwordless SSH.
|
SSH sudo password |
- If you use `sudo` with a password, specify it here.
- Ignore this if the SSH user is root or the sudoer does not need a sudo password.
|
SSH port |
- Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
|
Install software |
- On – Installs database server packages via the package manager.
- Off – The installation part will be skipped. This is useful if you use a custom repository or you have pre-installed the server with a specific database version or vendor.
|
Disable firewall |
- On – Disables firewall (recommended).
- Off – This configuration task will be skipped. If the firewall is enabled, ensure you have configured the necessary ports.
|
Disable SELinux/AppArmor |
- On – Disables SELinux or AppArmor (recommended).
- Off – This configuration task will be skipped. If enabled, make sure you have set a proper policy for the database-related processes and all of their dependencies.
|
Node Configuration |
Valkey port |
- Valkey server port. The default is 6379.
|
Cluster bus port |
- Valkey cluster bus port. The default is 16379.
|
Node timeout (ms) |
- The maximum amount of time a Valkey Cluster node can be unavailable without being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter also controls other important behaviors in the Valkey Cluster; notably, every node that cannot reach the majority of master nodes for the specified amount of time will stop accepting queries.
|
Replica validity factor |
- Specify the replica validity factor, used to decide whether a replica that has been disconnected from the primary for more than Node timeout multiplied by this value is still valid for failover.
- Set the factor to 0 to always consider a replica valid for failover. If the value is positive, a maximum disconnection time is calculated as the node timeout multiplied by this factor; a replica whose link to the master was down for longer than that will not attempt a failover.
|
Configuration template |
- Valkey configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files.
|
Username |
- Specify an admin user for Valkey that will be created by ClusterControl.
|
Password |
- Specify the Valkey admin password.
|
Repository |
- Use Vendor Repositories – Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided.
- Do Not Setup Vendor Repositories – Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on every database node as well as all future database/load balancer nodes in this cluster. Commonly, this is the best option if the database nodes are running without internet connections.
|
Shard #
|
Primary node |
- IP address or hostname of the primary database node. Press Enter to add the node; ClusterControl will then perform a pre-deployment check to verify that the node is reachable via SSH key-based authentication.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Replica nodes |
- IP address or hostname of the replica database node. Press Enter to add the node; ClusterControl will then perform a pre-deployment check to verify that the node is reachable via SSH key-based authentication. If there is no replica, leave this blank and proceed with adding a shard.
- If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
|
Add another shard |
- A minimum of 3 shards is required. A similar Shard section will be presented where you can add the primary and replica nodes.
|
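Because the pre-deployment check relies on SSH key-based authentication from the ClusterControl node, it can be useful to verify reachability manually before adding the nodes. A minimal sketch, assuming a hypothetical node address and key path; substitute the SSH user, key path, and port you entered above:
# run on the ClusterControl node; 10.0.0.11 and the key path are placeholders
ssh -i /root/.ssh/id_rsa -p 22 root@10.0.0.11 "hostname"
# for a non-root SSH user without a sudo password, also confirm passwordless sudo
ssh -i /root/.ssh/id_rsa -p 22 ubuntu@10.0.0.11 "sudo -n true && echo sudo OK"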
Base Template Files
All services configured by ClusterControl use a base configuration template available under /usr/share/cmon/templates on the ClusterControl node. You can directly modify the files to suit your deployment policy; however, this directory will be replaced after a package upgrade.
To make sure your custom configuration template files persist across upgrades, store them under the /etc/cmon/templates directory (ClusterControl 1.6.2 and later). When ClusterControl loads a template file for deployment, files under /etc/cmon/templates always take priority over files under /usr/share/cmon/templates. If two files with identical names exist in both directories, the one located under /etc/cmon/templates will be used.
Here is an example of how one would create a custom configuration template file:
# on the ClusterControl node, as privileged user
mkdir -p /etc/cmon/templates
cp /usr/share/cmon/templates/my.cnf.repl80 /etc/cmon/templates/my.cnf.repl80-custom
vi /etc/cmon/templates/my.cnf.repl80-custom # make your changes and save
Go to ClusterControl and start deploying a database cluster, then choose the configuration file created above from the Configuration template dropdown.
Dynamic Variables
Inside the provided template files, many configuration variables are configured dynamically by ClusterControl during deployment. These variables are represented by capitalized names enclosed in @ characters, for example, @DATADIR@. The following table lists the variables supported by ClusterControl, grouped by cluster type:
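As an illustration of how the substitution works, here is a minimal sketch comparing a template line with the rendered directive on a deployed MySQL node. The exact template contents and the rendered file path shown in the comments are assumptions and vary by vendor and operating system:
# on the ClusterControl node: the template carries the placeholder
grep '@DATADIR@' /etc/cmon/templates/my.cnf.repl80-custom
# expected output (illustrative): datadir=@DATADIR@
# on a deployed database node: the placeholder has been replaced
grep '^datadir' /etc/my.cnf
# expected output (illustrative): datadir=/var/lib/mysql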
Variable |
Description |
MySQL/MariaDB/Percona Server |
@BASEDIR@ |
The default is /usr . Value specified during cluster deployment takes precedence. |
@DATADIR@ |
The default is /var/lib/mysql . Value specified during cluster deployment takes precedence. |
@MYSQL_PORT@ |
The default is 3306. Value specified during cluster deployment takes precedence. |
@BUFFER_POOL_SIZE@ |
Automatically configured based on the host’s RAM. |
@LOG_FILE_SIZE@ |
Automatically configured based on the host’s RAM. |
@LOG_BUFFER_SIZE@ |
Automatically configured based on the host’s RAM. |
@BUFFER_POOL_INSTANCES@ |
Automatically configured based on the host’s CPU. |
@SERVER_ID@ |
Automatically generated based on the member’s server-id . |
@SKIP_NAME_RESOLVE@ |
Automatically configured based on MySQL variables. |
@MAX_CONNECTIONS@ |
Automatically configured based on the host’s RAM. |
@ENABLE_PERF_SCHEMA@ |
The default is disabled. Value specified during cluster deployment takes precedence. |
@WSREP_PROVIDER@ |
Automatically configured based on the Galera vendor. |
@HOST@ |
Automatically configured based on hostname/IP address. |
@GCACHE_SIZE@ |
Automatically configured based on disk space. |
@SEGMENTID@ |
The default is 0. Value specified during cluster deployment takes precedence. |
@WSREP_CLUSTER_ADDRESS@ |
Automatically configured based on members in the cluster. |
@WSREP_SST_METHOD@ |
Automatically configured based on the Galera vendor. |
@BACKUP_USER@ |
The default is backupuser . |
@BACKUP_PASSWORD@ |
Automatically generated and configured for backupuser . |
@GARBD_OPTIONS@ |
Automatically configured based on garbd options. |
@READ_ONLY@ |
Automatically configured based on replication role. |
@SEMISYNC@ |
The default is disabled. Value specified during cluster deployment takes precedence. |
@NDB_CONNECTION_POOL@ |
Automatically configured based on the host’s CPU. |
@NDB_CONNECTSTRING@ |
Automatically configured based on members in the MySQL cluster. |
@LOCAL_ADDRESS@ |
Automatically configured based on the host’s address. |
@GROUP_NAME@ |
The default is grouprepl . Value specified during cluster deployment takes precedence. |
@PEERS@ |
Automatically configured based on members in the Group Replication cluster. |
MongoDB |
@DATADIR@ |
The default is /var/lib/mongodb . Value specified during cluster deployment takes precedence. |
@MONGODB_PORT@ |
The default is 27017, 27018, 27019 (depending on the cluster type). Value specified during cluster deployment takes precedence. |
@LOGDIR@ |
Automatically configured based on vendor. |
@HOST@ |
Automatically configured based on hostname/IP address. |
@SMALLFILES@ |
Automatically configured based on disk space. |
@PIDFILEPATH@ |
Automatically configured based on MongoDB data directory. |
@REPLICASET_NAME@ |
The default is my_mongodb_N . Value specified during cluster deployment takes precedence. |
Redis Cluster/Redis Sentinel/Valkey Cluster |
@BIND_ADDRESS@ |
Automatically configured based on the host’s address. |
@PORT@ |
The default is 6379. Value specified during cluster deployment takes precedence. |
@DATADIR@ |
The default is /var/lib/redis . Value specified during cluster deployment takes precedence. |
@REPLICATION_PASSWORD@ |
Automatically configured based on the value specified during the deployment. |
@REPLICA_OF@ |
Automatically configured based on members in the topology. |
@CLUSTER_CONFIG_FILE@ |
Automatically configured based on cluster type. |
@CLUSTER_NODE_TIMEOUT@ |
The default is 15000 milliseconds. Value specified during cluster deployment takes precedence. |
@CLUSTER_BUS_PORT@ |
The default is 16379. Value specified during cluster deployment takes precedence. |
@CLUSTER_REPLICA_VALIDITY_FACTOR@ |
The default is 10. Value specified during cluster deployment takes precedence. |
@ACL_DB_ADMIN_USER@ |
Automatically configured based on members in the topology. |
@SENTINEL_PASSWORD@ |
Automatically configured based on the value specified during the deployment. |
@PASSWORD@ |
Automatically configured based on the value specified during the deployment. |
@MAXMEMORY_BYTES@ |
Automatically configured based on the host’s RAM. |
@MAXMEMORY_POLICY@ |
The default is allkeys-lru. |
@CLUSTER_ENABLED@ |
Automatically configured based on cluster type. |
Elasticsearch |
@CLUSTER_NAME@ |
Automatically configured based on the value specified during the deployment. |
@NODE_NAME@ |
Automatically configured based on hostname/IP address. |
@NODE_ROLES@ |
Automatically configured based on the node’s role. |
@DATADIR@ |
The default is /var/lib/elasticsearch . Value specified during cluster deployment takes precedence. |
@SNAPSHOTS_LOCATION@ |
Automatically configured based on the value specified during the deployment. |
@BIND_ADDRESS@ |
Automatically configured based on hostname/IP address. |
@PUBLISHED_ADDRESS@ |
Automatically configured based on hostname/IP address. |
@HTTP_PORT@ |
The default is 9200. Value specified during cluster deployment takes precedence. |
@DISCOVERY_SEED_HOSTS@ |
Automatically configured based on hostname/IP address of the members in the topology. |
@CLUSTER_INITIAL_MASTER_NODES@ |
Automatically configured based on members in the topology. |
@SECURITY_ENABLED@ |
Automatically configured based on the value specified during the deployment. |