Import Database Cluster
The Import Database Cluster feature in ClusterControl enables users to bring existing database clusters under management with minimal effort.
With Import Database Cluster, you can:
- Easily connect single-node or multi-node database clusters to ClusterControl.
- Automatically detect and map existing topologies and replication setups.
- Apply custom configuration templates for unified management.
- Enable encryption and other security features for enhanced protection.
This feature streamlines the onboarding process, allowing database administrators to gain full visibility and control over existing infrastructure without the need for re-deployment. Whether you’re consolidating environments or centralizing management, ClusterControl ensures consistency, security, and operational best practices.
Support Matrix
The following database cluster types, vendors, and topologies are supported:
Database | Vendor | Topology |
---|---|---|
MySQL | Percona, Oracle | Standalone, replication |
MariaDB | MariaDB | Standalone, replication |
Galera Cluster | MariaDB, Percona | Galera certification-based replication |
PostgreSQL | PostgreSQL, EnterpriseDB | Standalone, streaming replication, logical replication |
TimescaleDB | TimescaleDB | Standalone, streaming replication |
MongoDB | MongoDB, Percona, MongoDB Enterprise | Replica set, sharded cluster |
Redis | Redis, Valkey | Sentinel, cluster |
Microsoft SQL Server for Linux | Microsoft | Standalone, Always On availability group |
Elasticsearch | Elastic | Single-node cluster, high availability cluster |
Prerequisites
The following prerequisites must be fulfilled prior to the import for all database clusters:
- Make sure the target database nodes are running on a supported architecture platform and operating system. See Hardware and Operating System.
- Passwordless SSH (SSH using key-based authentication) is configured from the ClusterControl node to all database nodes (a quick setup sketch follows this list). See SSH Key-based Authentication.
- Verify that sudo is working properly if you are using a non-root user. See Operating System User.
- The target cluster must be in a healthy state, not a degraded one. For example, if you have a three-node Galera cluster, all nodes must be alive, accessible, and in sync.
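To satisfy the SSH and sudo prerequisites above, something along these lines works on most systems (the user, host, and key path are placeholders):

```bash
# Generate a key pair on the ClusterControl node (skip if one already exists)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

# Copy the public key to every database node
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@10.10.10.11

# Verify key-based SSH and sudo in one go
ssh -i ~/.ssh/id_rsa ubuntu@10.10.10.11 "sudo whoami"
```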
Note
For more details, refer to the Requirements section. Each time you import an existing cluster or server, ClusterControl will trigger a job under ClusterControl GUI → Activity Center → Jobs, where you can see its progress and status. Click the … button of the import job and then click Details; a window will appear with messages showing the progress.
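The same job list is also available from the s9s CLI, for example:

```bash
# List recent jobs, then follow the log of one job (the job ID is a placeholder)
s9s job --list
s9s job --log --job-id=1
```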
MySQL Replication
ClusterControl can manage and monitor existing MySQL and MariaDB servers, whether standalone or in a replication setup. A minimum of two nodes is required for MySQL/MariaDB replication. If only one database IP address or hostname is provided, ClusterControl will deploy it as a standalone MySQL/MariaDB server with binary log enabled.
When adding individual hosts, listing them together will place them in the same server group. For this to work, ClusterControl expects all instances within a group to use the same MySQL root password. It will also automatically try to identify each server's role (primary, replica, multi, or standalone).
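You can inspect the same variables ClusterControl uses for role detection on each node, for example:

```bash
# On a replica this typically returns 1; on the primary, 0
mysql -u root -p -e "SELECT @@global.read_only;"

# MySQL and Percona only (MariaDB has no super_read_only)
mysql -u root -p -e "SELECT @@global.super_read_only;"
```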
When importing an existing MySQL Replication, ClusterControl will do the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (primary, replica, multi, standalone).
- Pull the configuration files.
- Generate the authentication key and register the nodes into ClusterControl.
Default configuration
By default, ClusterControl deploys MySQL/MariaDB replication with the following configurations:
- MySQL GTID with `log_slave_updates` enabled (MySQL and Percona only).
- MariaDB GTID with `log_slave_updates` enabled (MariaDB only).
- All database nodes will be configured with `read_only=ON` and `super_read_only=ON` (if supported) for protection against accidental writes. The chosen primary will be promoted by disabling read-only at runtime.
- The generated account credentials are stored inside `secrets-backup.cnf` under the MySQL configuration directory.
- ClusterControl will create and grant necessary privileges for the following database users:

Database user | Purpose |
---|---|
`cmon` | Management and automation. |
`cmonexporter` | Prometheus exporter for database monitoring. |
`cmonagent` | ClusterControl query monitoring agent. |
`cmon_replication` | MySQL/MariaDB replication. |
`backupuser` | Backup and restore management. |
If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.
Attention
ClusterControl sets `read_only=ON` on all slaves, but a privileged user (with SUPER) can still write to a slave (except on MySQL versions that support `super_read_only`).
Deployment steps
-
To import an existing MySQL replication cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "MySQL Replication".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. This is the name of your cluster, entered under the Name your cluster label. Once the import is done, ClusterControl will use it as the cluster's registry name.
- Tags: Add tags to search or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must be physically secured and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Server port: The database server port that ClusterControl shall configure on all database nodes.
- Server data directory: The database server data directory path that ClusterControl shall configure on all database nodes. If the path does not exist, ClusterControl will create and configure it automatically.
- Admin/Root user: The database admin username that ClusterControl shall configure on all database nodes. This user will be granted global SUPER privilege and the GRANT option for localhost only.
- Admin/Root password: The password for Admin/Root user.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special configuration from the vendor (common for enterprise databases), in which case ClusterControl will skip the repository configuration part.
- information_schema queries: By default, this is set to off. When enabled, ClusterControl (cmon) is allowed to query the information_schema schema to collect metrics from your MySQL database nodes.
- Cluster auto-recovery: By default, this is set to off. Once enabled, ClusterControl will attempt to recover a degraded or failing cluster automatically.
- Node auto-recovery: By default, this is set to off. Once enabled, ClusterControl will attempt to restart failing nodes (for example, an accidentally terminated database process) so they rejoin the cluster.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, specify your existing target database nodes. You can add more than one node, depending on the existing MySQL Replication cluster you need to register to ClusterControl. Since this is an existing, already-configured database cluster, ClusterControl will be able to identify and scan the topology of your database cluster.
-
Node: Specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
Import a three-node Oracle MySQL Replication 8.0 cluster, with operating system user "ubuntu". Regardless of the order, ClusterControl will be able to identify the primary node(s) through the `read_only` and `super_read_only` variables:

```bash
s9s cluster --register \
    --cluster-type=mysqlreplication \
    --nodes="10.10.10.11;10.10.10.12;10.10.10.13" \
    --vendor=oracle \
    --provider-version=8.0 \
    --db-admin-passwd='mYpa$$word' \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_rsa \
    --cluster-name='PROD - MySQL Replication 8.0' \
    --wait \
    --log
```
Recommended Next Steps
MySQL Galera
ClusterControl can manage and monitor existing Galera Clusters, assuming all instances in the group use the same MySQL root password. Adding asynchronous slaves and adding load balancers connected to the Galera Cluster should be specified and imported separately after the cluster is imported into ClusterControl.
When importing an existing Galera Cluster, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Detect the remaining nodes in the cluster, only if Automatic node discovery is ON.
- Pull the configuration files.
- Generate the authentication key and register the nodes into ClusterControl.
A minimal setup consists of one Galera node (no high availability, but this can later be scaled with more nodes). However, a minimum of three nodes is recommended for high availability. Garbd (an arbitrator) can be added later after the deployment completes.
The following vendors and versions are supported for importing your existing cluster:
- Percona XtraDB Cluster - 8.0 and 8.4.
- Percona XtraDB Cluster Pro - 8.4.
- MariaDB Cluster - 10.4, 10.5, 10.6, 10.11 (LTS) and 11.4 (LTS).
Default configuration
By default, ClusterControl deploys Galera Cluster with the following configurations:
- Use `xtrabackup-v2` or `mariabackup` (depending on the vendor chosen) for `wsrep_sst_method`.
- Binary logging is enabled.
- The generated account credentials are stored inside `secrets-backup.cnf` under the MySQL configuration directory.
- ClusterControl will create and grant necessary privileges for the following database users:

Database user | Purpose |
---|---|
`cmon` | Management and automation. |
`cmonexporter` | Prometheus exporter for database monitoring. |
`cmonagent` | ClusterControl query monitoring agent. |
`cmon_replication` | MySQL/MariaDB replication. |
`backupuser` | Galera SST, backup and restore management. |
If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.
Deployment steps
-
To import an existing MySQL Galera Cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "MySQL Galera".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. This is the name of your cluster, entered under the Name your cluster label. Once the import is done, ClusterControl will use it as the cluster's registry name.
- Tags: Add tags to search or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must be physically secured and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Server port: The database server port that ClusterControl shall configure on all database nodes.
- Server data directory: The database server data directory path that ClusterControl shall configure on all database nodes. If the path does not exist, ClusterControl will create and configure it automatically.
- Admin/Root user: The database admin username that ClusterControl shall configure on all database nodes. This user will be granted global SUPER privilege and the GRANT option for localhost only.
- Admin/Root password: The password for Admin/Root user.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special configuration from the vendor (common for enterprise databases), in which case ClusterControl will skip the repository configuration part.
- information_schema queries: By default, this is set to off. When enabled, ClusterControl (cmon) is allowed to query the information_schema schema to collect metrics from your MySQL database nodes.
- Cluster auto-recovery: By default, this is set to off. Once enabled, ClusterControl will attempt to recover a degraded or failing cluster automatically.
- Node auto-recovery: By default, this is set to off. Once enabled, ClusterControl will attempt to restart failing nodes (for example, an accidentally terminated database process) so they rejoin the cluster.
- Automatic node discovery: By default, this is set to on. Set this to off if `wsrep_node_incoming_address=AUTO` is set for all of your primary Galera nodes, or if your ClusterControl controller node uses different IPs/hostnames to reach the Galera Cluster nodes than those used for inter-node connectivity; in that case, disable Automatic node discovery and pass the IPs/hostnames that ClusterControl can use to connect to the Galera nodes.
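To check whether this applies to your cluster, you can query the variable on any Galera node:

```bash
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_node_incoming_address';"
```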
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, specify your existing target database nodes. You can add more than one node, depending on the existing MySQL Galera Cluster you need to register to ClusterControl. Since this is an existing, already-configured database cluster, ClusterControl will be able to identify and scan the topology of your database cluster.
-
Node: Specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
Import a three-node Percona XtraDB Cluster 8.0, with operating system user "ubuntu":
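A command along these lines should work; this sketch assumes the `galera` cluster type and placeholder hosts and credentials, mirroring the MySQL Replication example above:

```bash
s9s cluster --register \
    --cluster-type=galera \
    --nodes="10.10.10.21;10.10.10.22;10.10.10.23" \
    --vendor=percona \
    --provider-version=8.0 \
    --db-admin-passwd='mYpa$$word' \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_rsa \
    --cluster-name='PROD - Percona XtraDB Cluster 8.0' \
    --wait \
    --log
```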
-
Create a three-node MariaDB Cluster 11.4 and let the deployment job run in the foreground:
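A sketch of such a command (hosts and credentials are placeholders; the `galera` cluster type is assumed, and `--wait` without `--log` keeps the job in the foreground):

```bash
s9s cluster --create \
    --cluster-type=galera \
    --nodes="192.168.5.21;192.168.5.22;192.168.5.23" \
    --vendor=mariadb \
    --provider-version=11.4 \
    --db-admin-passwd='mYpa$$word' \
    --os-user=root \
    --os-key-file=/root/.ssh/id_rsa \
    --cluster-name='MariaDB Cluster 11.4' \
    --wait
```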
Recommended Next Steps
PostgreSQL Streaming
ClusterControl can manage and monitor existing PostgreSQL or TimescaleDB clusters running streaming replication version 12 or later. When importing hosts, those listed together will form a server group in the user interface. It is expected that all instances within the same group use the same database administrator password.
A minimum of two nodes is required for PostgreSQL streaming replication. If only one database IP address or hostname is provided, ClusterControl will deploy it as a standalone PostgreSQL server. The following vendors and versions are supported for importing your existing cluster:
- PostgreSQL from postgresql.org repository - 12, 13, 14, 15, 16 and 17.
- PostgreSQL from EDB repository - 12, 13, 14, 15, 16 and 17 (requires a valid EDB repository token).
When importing an existing PostgreSQL or TimescaleDB Streaming Replication, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (master, slave, TimescaleDB extension).
- Pull the configuration files.
- Generate the authentication key and register the nodes into ClusterControl.
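Before importing, you can verify each node's role and replication health yourself (the host and the default `postgres` superuser below are placeholders):

```bash
# On any node: returns "f" on the primary, "t" on a replica
psql -U postgres -h 10.10.10.11 -c "SELECT pg_is_in_recovery();"

# On the primary: confirm all replicas are streaming
psql -U postgres -h 10.10.10.11 -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"
```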
Default configuration
By default, ClusterControl deploys PostgreSQL instances with the following configurations:
- Configure and load the `pg_stat_statements` module.
- The WAL level is set to `replica`.
- All replicas will be set with `hot_standby`.
- Configure the `cluster_name` value.
- ClusterControl will configure the PostgreSQL instance with SSL encryption for client-server connections.
- ClusterControl will create and grant necessary privileges for the following database users:

Database user | Purpose |
---|---|
`cmon` | Management and automation. |
`cmonexporter` | Prometheus exporter for database monitoring. |
`cmonagent` | ClusterControl query monitoring agent. |
`cmon_replication` | PostgreSQL streaming replication. |
`backupuser` | Backup and restore management. |
Deployment steps
-
To import an existing PostgreSQL streaming replication, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "PostgreSQL Streaming".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. This is the name of your cluster, entered under the Name your cluster label. Once the import is done, ClusterControl will use it as the cluster's registry name.
- Tags: Add tags to search or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must be physically secured and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Server port: The database server port that ClusterControl shall configure on all database nodes.
- User: The database admin username that ClusterControl shall configure on all database nodes. This user will be granted the global SUPERUSER role, allowed from localhost only.
- Password: The password for the User.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special configuration from the vendor (common for enterprise databases), in which case ClusterControl will skip the repository configuration part.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, you can specify the target database nodes and configure the database topology that you want to deploy:
- Primary node: Specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic. Only one primary node is allowed for single-primary replication.
- Replica nodes: Specify the IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic. You can specify zero or more replica nodes.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
If you have more than one secondary (replica) node, make sure to add each of them under the Replica nodes text field.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
Import a three-node PostgreSQL 16 streaming replication cluster, with operating system user "ubuntu" (the first node is the primary) and tags `production` and `postgres`, then wait until the job is done while showing logs during the deployment process:

```bash
s9s cluster --register \
    --cluster-type=postgresql \
    --nodes="10.10.10.11;10.10.10.12;10.10.10.13" \
    --vendor=postgresql \
    --provider-version=16 \
    --db-admin='postgres' \
    --db-admin-passwd='mYpa$$word' \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_rsa \
    --with-tags='production;postgres' \
    --cluster-name='PostgreSQL 16 - Streaming Replication' \
    --wait --log
```
-
Import a four-node PostgreSQL EDB 16 streaming replication cluster, using operating system user "root" (the first node is the primary), and let the deployment job run in the foreground:

```bash
s9s cluster --register \
    --cluster-type=postgresql \
    --nodes="192.168.5.11;192.168.5.12;192.168.5.13;192.168.5.14" \
    --vendor=enterprisedb \
    --provider-version=16 \
    --db-admin='mydbuser' \
    --db-admin-passwd='mydbPassw0rd' \
    --enterprise-token='XXXXXXXXXXXXXXXXXXXXXXXXX' \
    --os-user=root \
    --os-key-file=/root/.ssh/id_rsa \
    --cluster-name='System A - PostgreSQL EDB 16 - Streaming Replication' \
    --wait
```
Recommended Next Steps
TimescaleDB
Import an existing TimescaleDB standalone or streaming replication cluster into ClusterControl. A minimum of two nodes is required for TimescaleDB streaming replication. If only one database IP address or hostname is provided, ClusterControl will deploy it as a standalone TimescaleDB server. The following vendors and versions are supported for importing your existing cluster:
- TimescaleDB - 12, 13, 14, 15, 16 and 17.
Default configuration
By default, ClusterControl deploys TimescaleDB with the following configurations:
- Configure and load the `pg_stat_statements` module and the `timescaledb` extension.
- The WAL level is set to `replica`.
- All replicas will be set with `hot_standby`.
- Configure the `cluster_name` value.
- ClusterControl will configure the TimescaleDB instance with SSL encryption for client-server connections.
- ClusterControl will create and grant necessary privileges for the following database users:

Database user | Purpose |
---|---|
`cmon` | Management and automation. |
`cmonexporter` | Prometheus exporter for database monitoring. |
`cmonagent` | ClusterControl query monitoring agent. |
`cmon_replication` | PostgreSQL streaming replication. |
`backupuser` | Backup and restore management. |
Note
You can also import an existing PostgreSQL cluster and convert it to TimescaleDB at a later stage. However, this action is irreversible and ClusterControl will treat the cluster as TimescaleDB from then on.
Deployment steps
-
To import an existing TimescaleDB streaming replication, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "TimescaleDB".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. This is the name of your cluster, entered under the Name your cluster label. Once the import is done, ClusterControl will use it as the cluster's registry name.
- Tags: Add tags to search or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must be physically secured and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Server port: The database server port that ClusterControl shall configure on all database nodes.
- User: The database admin username that ClusterControl shall configure on all database nodes. This user will be granted the global SUPERUSER role, allowed from localhost only.
- Password: The password for the User.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special configuration from the vendor (common for enterprise databases), in which case ClusterControl will skip the repository configuration part.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, you can specify the target database nodes and configure the database topology that you want to deploy:
- Primary node: Specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic. Only one primary node is allowed for single-primary replication.
- Replica nodes: Specify the IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic. You can specify zero or more replica nodes.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
If you have more than one secondary (replica) node, make sure to add each of them under the Replica nodes text field.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
Import a three-node TimescaleDB 16 streaming replication cluster, with operating system user "ubuntu" (the first node is the primary) and tags `production` and `timescaledb`:

```bash
s9s cluster --register \
    --cluster-type=postgresql \
    --nodes="10.10.10.11;10.10.10.12;10.10.10.13" \
    --vendor=postgresql \
    --provider-version=16 \
    --db-admin='mydbuser' \
    --db-admin-passwd='mydbPassw0rd' \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_rsa \
    --with-tags='production;timescaledb' \
    --cluster-name='TimescaleDB 16 - Streaming Replication' \
    --with-timescaledb
```
-
Create a standalone TimescaleDB 15, with operating system user "root" and tags `test` and `timescaledb`, and wait until the job finishes:

```bash
s9s cluster --create \
    --cluster-type=postgresql \
    --nodes="192.168.99.11" \
    --vendor=postgresql \
    --provider-version=15 \
    --db-admin='postgres' \
    --db-admin-passwd='mYpa$$word' \
    --os-user=root \
    --os-key-file=/root/.ssh/id_rsa \
    --with-tags='test;timescaledb' \
    --cluster-name='TimescaleDB 15 - standalone' \
    --with-timescaledb \
    --wait
```
MongoDB Replica Set
ClusterControl is able to manage and monitor an existing MongoDB or Percona Server for MongoDB Replica Set.
When importing an existing MongoDB ReplicaSet, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (primary, secondary, arbiter).
- Pull the configuration files.
- Generate the authentication key and register the nodes into ClusterControl.
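If you want to confirm each member's role before importing, you can query the replica set from any node (host and credentials below are placeholders):

```bash
# Prints each member's address and state (PRIMARY, SECONDARY, ARBITER)
mongosh "mongodb://192.168.40.190:27017" -u mydbuser -p 'mydbPassw0rd' \
    --authenticationDatabase admin \
    --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'
```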
The database cluster will be automatically added to ClusterControl once the import process completes successfully. If only one node is specified, ClusterControl will deploy it as a standalone MongoDB node. The following vendors and versions are supported for importing an existing cluster:
- MongoDB Community - 4.4, 5.0, 6.0 and 7.0
- MongoDB Enterprise - 4.4, 5.0, 6.0 and 7.0
- Percona Server for MongoDB - 4.4, 5.0, 6.0 and 7.0
Attention
If SSL/TLS is required, ClusterControl only supports a proper CAFile configuration, as shown in the MongoDB documentation, and does not support the `--allowInvalidCertificates` flag.
Default configuration
By default, ClusterControl deploys MongoDB Replica Set members with the following configurations:
- Configure `setParameter.enableLocalhostAuthBypass: true` inside the MongoDB configuration file.
- ClusterControl will create and grant necessary roles for additional MongoDB users: `admin.cmon_backup` for backup and restore purposes and `admin.cmonexporter` for query monitoring.
If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.
Attention
It is possible to import only two MongoDB nodes (without an arbiter). The caveat of this approach is that there is no automatic failover: if the primary node goes down, manual failover is required to make the other server run as primary. Automatic failover works fine with three or more nodes.
Deployment steps
-
To import an existing MongoDB replica set, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "MongoDB ReplicaSet".
-
Under Cluster details, specify the cluster details you want to assign:
- Vendor: Shown under the Vendor and version label in the panel, this is the vendor whose binaries your MongoDB ReplicaSet is running on. This is required; choose from Percona, MongoDB, or MongoDB Enterprise.
- Name: This is optional. This is the name of your cluster, entered under the Name your cluster label. Once the import is done, ClusterControl will use it as the cluster's registry name.
- Tags: Add tags to search or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must be physically secured and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Server port: The database server port that ClusterControl shall configure on all database nodes.
- User: The database admin username that ClusterControl shall configure on all database nodes. This user will be granted the global SUPERUSER role, allowed from localhost only.
- Password: The password for the User.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special configuration from the vendor (common for enterprise databases), in which case ClusterControl will skip the repository configuration part.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, you can specify the target database nodes and configure the database topology that you want to deploy:
- Node: Only specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
To import a two-node MongoDB Replica Set using Percona binaries, you only need to specify the primary node. For example, if the primary is running on 192.168.40.190 and a replica on 192.168.40.191, there is no need to specify the replica, as it will be auto-detected and imported by ClusterControl as well. The following command demonstrates this using `ubuntu` as the OS username and an `ed25519` key, then waits and prints the logs of the deployment job:

```bash
s9s cluster --register \
    --cluster-type=mongodb \
    --nodes="192.168.40.190" \
    --vendor=percona \
    --provider-version='7.0' \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_ed25519 \
    --db-admin='mydbuser' \
    --db-admin-passwd='mydbPassw0rd' \
    --cluster-name='MongoDB Percona Server ReplicaSet 7.0' \
    --wait --log
```
-
To import a three-node MongoDB Enterprise Replica Set 7.0, with operating system user "root", and let the deployment job run in the foreground:

```bash
s9s cluster --register \
    --cluster-type=mongodbenterprise \
    --nodes="192.168.40.155" \
    --vendor=mongodbenterprise \
    --provider-version='7.0' \
    --os-user=root \
    --os-key-file=/root/.ssh/id_rsa \
    --db-admin='mydbuser' \
    --db-admin-passwd='mydbPassw0rd' \
    --cluster-name='MongoDB Enterprise ReplicaSet 7.0'
```
MongoDB Sharded Cluster
ClusterControl is able to manage and monitor an existing MongoDB, Percona Server for MongoDB, or MongoDB Enterprise 4.x, 5.x, 6.x and 7.x sharded cluster setup. When importing an existing MongoDB Sharded Cluster, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (mongos, config server, shard server - primary, secondary, arbiter).
- Pull the configuration files.
- Generate the authentication key and register the nodes into ClusterControl.
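To confirm the sharded topology from one of the mongos instances you plan to register (host and credentials are placeholders):

```bash
# Prints the shard list, config servers, and active mongos instances
mongosh "mongodb://192.168.1.11:27017" -u mydbuser -p 'mydbPassw0rd' \
    --authenticationDatabase admin --eval 'sh.status()'
```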
The following vendors and versions are supported for importing an existing cluster:
- MongoDB Enterprise - 4.4, 5.0, 6.0 and 7.0
- MongoDB Community - 4.4, 5.0, 6.0 and 7.0
- Percona Server for MongoDB - 4.4, 5.0, 6.0 and 7.0
Default configuration
By default, ClusterControl deploys MongoDB Sharded Cluster with the following configurations:
- Configure `setParameter.enableLocalhostAuthBypass: true` inside the MongoDB configuration file.
- ClusterControl will create and grant necessary privileges for an additional MongoDB user: `admin.cmon_backup` for backup and restore purposes.
If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.
Attention
MongoDB Sharded Cluster does not support the `mongodump` backup method. Users will be asked to install Percona Backup for MongoDB (PBM) when creating or scheduling a backup for this cluster type after the deployment completes. Note that PBM requires a shared remote backup storage (e.g., NFS) accessible on all MongoDB nodes.
Deployment steps
-
To import an existing MongoDB Sharded Cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "MongoDB Shards".
-
Under Cluster details, specify the cluster details you want to assign:
- Vendor: Shown under the Vendor and version label in the panel, this is the vendor whose binaries your MongoDB Sharded Cluster is running on. This is required; choose from Percona, MongoDB, or MongoDB Enterprise.
- Name: This is optional. This is the name of your cluster, entered under the Name your cluster label. Once the import is done, ClusterControl will use it as the cluster's registry name.
- Tags: Add tags to search or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must be physically secured and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Server port: The database server port that ClusterControl shall configure on all database nodes.
- User: The database admin username that ClusterControl shall configure on all database nodes. This user will be granted the global SUPERUSER role, allowed from localhost only.
- Password: The password for the User.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special configuration from the vendor (common for enterprise databases), in which case ClusterControl will skip the repository configuration part.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, you will see the label Router/Mongos servers, meaning you only have to specify the router (mongos) nodes. This is where you specify the target database nodes that you want to register:
- Node router server: Specify the IP address or hostname of a router (mongos) node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
Specify only the MongoDB router servers (mongos); ClusterControl will discover the rest of the config and shard members. If you have multiple MongoDB router servers, make sure to add all of them here.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
To import a MongoDB Sharded Cluster using Percona binaries with three router (mongos) nodes, using `ubuntu` as the OS username and an `ed25519` key, then wait and print the logs of the deployment job:

```bash
s9s cluster --register \
    --cluster-type=mongodb \
    --nodes="mongos://192.168.1.11;mongos://192.168.1.12;mongos://192.168.1.13" \
    --vendor=percona \
    --provider-version='7.0' \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_ed25519 \
    --db-admin='mydbuser' \
    --db-admin-passwd='mydbPassw0rd' \
    --cluster-name='MongoDB Percona Server Shards 7.0' \
    --wait --log
```
Valkey Sentinel
Import existing Valkey instances with Sentinel for v7 and v8. When importing existing Valkey instances, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (primary, replica, sentinel).
Attention
ClusterControl does not support importing Valkey instances without authentication. Authentication must be enabled, and the provided admin username and password must have the `~* +@all` (all keys, all commands) privilege.
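You can verify the admin user's privileges beforehand, for example (host and password are placeholders; add `--tls` if TLS is already enabled):

```bash
# Lists configured users with their ACL rules; the admin user should carry "~* +@all"
valkey-cli -h 192.168.40.190 -a 'yourpassword' ACL LIST
```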
The following vendors and versions are supported for importing an existing cluster:
- Valkey Sentinel - 7 and 8
- Canonical - 7
For a very minimal setup (suitable for testing and experimenting), it is possible to import a single-node Valkey instance with Sentinel (both services co-located on the same node), which can later be scaled out with more Valkey database and Sentinel nodes.
Default configuration
By default, ClusterControl deploys Valkey instances with the following configurations:
- Valkey Sentinel (default port is 26379) will be co-located with the Valkey instances (default port is 6379).
- ClusterControl will configure the Valkey instance with `appendonly` enabled.
- ClusterControl will secure the instance with authentication enabled and configure the `requirepass` and `masterauth` options.
- ClusterControl will enable TLS encryption. To access the database nodes, one must use the `--tls` flag to connect.
- The configuration `maxMemory` (70% of the node's RAM, rounded to the nearest power of 2) and `maxMemoryPolicy=allkeys-lru` will be set to reduce the risk of Valkey being killed by OOM.
Tips
Valkey Sentinel requires 3 nodes for automatic primary promotion. Sentinel can be co-located on the ClusterControl server if you want to deploy a two-node Valkey replication cluster (Sentinel will be co-located on each database instance).
Deployment steps
-
To import an existing Valkey Sentinel cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "Valkey Sentinel".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. This is the name of your cluster, entered under the Name your cluster label. Once the import is done, ClusterControl will use it as the cluster's registry name.
- Tags: Add tags to search or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must be physically secured and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Valkey port: The Valkey server port that ClusterControl shall configure on all database nodes (default is 6379). ClusterControl assumes that all database nodes are using the same port.
- Valkey Sentinel port: The Sentinel port that ClusterControl shall configure on all Sentinel nodes (default is 26379). ClusterControl assumes that all Sentinel nodes are using the same port.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special configuration from the vendor (common for enterprise databases), in which case ClusterControl will skip the repository configuration part.
- Password: Under the Authentication pane, this is the primary node password used for client/replica authentication.
- Replication user: Under the Authentication pane, this is the replication user of your Valkey replication cluster.
- Replication password: Under the Authentication pane, this is the replication password of your Valkey replication cluster.
- Sentinel password: The password for your Sentinel user, `sentinel-user`.
- Use existing certificates: Specify cmon's certificates directory to use for the imported cluster. If the SSL configuration is not completed, scale-up operations may fail.
- Cluster auto-recovery: Under Advanced configuration, you can turn this on or off. When on, the cluster will be imported with cluster auto-recovery enabled; when off, cluster recovery remains disabled.
- Node auto-recovery: Under Advanced configuration, you can turn this on or off. When on, the cluster will be imported with node auto-recovery enabled; when off, node recovery remains disabled.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, add your database nodes:
- Primary node: Only specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- Replica nodes: Only specify the IP address or hostname of your replica node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
If you have more than one replica node, make sure to add each of them under the Replica nodes text field.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
To import a three-node Valkey Sentinel cluster using `ubuntu` as the OS username, `sjl2l5wi` as the Sentinel password, and an `ed25519` key, then wait and print the logs of the deployment job:

```bash
s9s cluster --register \
    --cluster-type=valkey \
    --nodes="valkey://192.168.40.190:6379;valkey://192.168.40.191:6379;valkey://192.168.40.192:6379;valkey-sentinel://192.168.40.190:26379;valkey-sentinel://192.168.40.191:26379;valkey-sentinel://192.168.40.192:26379" \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_ed25519 \
    --sentinel-passwd 'sjl2l5wi' \
    --vendor=valkey \
    --provider-version=8 \
    --cluster-name="My Valkey Sentinel v8" \
    --log --wait
```
Valkey Cluster
Import existing Valkey Cluster instances for v7 and v8. When importing existing Valkey instances, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (primary, replica).
Attention
ClusterControl does not support importing Valkey instances without authentication. Authentication must be enabled, and the provided admin username and password must have the `~* +@all` (all keys, all commands) privilege.
The following vendors and versions are supported for importing an existing cluster:
- Valkey Cluster - 7 and 8
- Canonical - 7
Default configuration
By default, ClusterControl deploys Valkey cluster instances with the following configurations:
- ClusterControl will configure the Valkey instances with `appendonly` enabled.
- ClusterControl will secure the instance with AUTH enabled and configure the `requirepass` and `masterauth` options.
- ClusterControl will enable TLS encryption. To access the cluster, one must use the `--tls` flag to connect.
- The configuration `maxMemory` (70% of the node's RAM, rounded to the nearest power of 2) and `maxMemoryPolicy=allkeys-lru` will be set to reduce the risk of Valkey being killed by OOM.
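Before importing, you can confirm the cluster is healthy from any node (host and password are placeholders; add `--tls` if TLS is already enabled):

```bash
# "cluster_state:ok" means all 16384 hash slots are served
valkey-cli -h 192.168.40.198 -a 'yourpassword' CLUSTER INFO
```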
Deployment steps
-
To import an existing Valkey Cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "Valkey Cluster".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. This is the name of your cluster, entered under the Name your cluster label. Once the import is done, ClusterControl will use it as the cluster's registry name.
- Tags: Add tags to search or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. Relative paths are not supported. The SSH private key must be physically secured and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Server port: The database server port that ClusterControl shall configure on all database nodes.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special configuration from the vendor (common for enterprise databases), in which case ClusterControl will skip the repository configuration part.
- Username: The database admin username that ClusterControl shall configure on all database nodes. This user will be granted the global SUPERUSER role, allowed from localhost only.
- Password: The password for the Username.
- Cluster auto-recovery: Under Advanced configuration, you can turn this on or off. When on, the cluster will be imported with cluster auto-recovery enabled; when off, cluster recovery remains disabled.
- Node auto-recovery: Under Advanced configuration, you can turn this on or off. When on, the cluster will be imported with node auto-recovery enabled; when off, node recovery remains disabled.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, specify one of your existing Valkey Cluster nodes; ClusterControl will discover the remaining members:
- Cluster node: Specify the IP address or hostname of one of the cluster nodes. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
Specify one node and ClusterControl will discover the rest of the members.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
To import a Valkey Sharded Cluster, you only need to specify the primary node. For example, if the primary runs on 192.168.40.198 and a replica runs on 192.168.40.199, there is no need to specify the replica; it will be auto-detected and imported by ClusterControl as well. The following command demonstrates this using `ubuntu` as the OS user and an `ed25519` key, waiting for the job and printing its logs:
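A sketch modeled on the Redis Sentinel example further below; the `--cluster-type` value and the node URI scheme are assumptions, so verify them against your s9s version:

```bash
# Hedged sketch: --cluster-type=valkey_sharded and the valkey:// scheme are
# assumptions; only the primary needs to be listed, replicas are discovered.
s9s cluster --register --cluster-type=valkey_sharded \
    --nodes="valkey://192.168.40.198:6379" \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_ed25519 \
    --vendor=valkey --provider-version=8 \
    --cluster-name="My Valkey Sharded Cluster" \
    --log --wait
```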
Redis Sentinel
Import existing Redis instances with Sentinel for v7 and v8. When importing existing Redis instances, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (primary, replica, sentinel).
Attention
ClusterControl does not support importing Redis instances without authentication. Authentication must be enabled, and the provided admin username and password must have the `~* +@all` (all keys, all commands) privilege.
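To confirm this requirement before importing, you can inspect or create such a user with the standard Redis ACL commands; in this sketch, `ccadmin` and the passwords are placeholders:

```bash
# Show the ACL of the user you plan to hand to ClusterControl;
# the rules must include ~* +@all (all keys, all commands).
redis-cli -a 'AdminPassword' ACL GETUSER ccadmin

# If needed, create such a user (name and password are placeholders):
redis-cli -a 'AdminPassword' ACL SETUSER ccadmin on '>SecretPass' '~*' '+@all'
```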
The following vendors and versions are supported for importing an existing cluster:
- Redis Sentinel - 6 and 7
Default configuration
By default, ClusterControl deploys Redis instances with the following configurations:
- Redis Sentinel (default port is 26379) will be co-located with the Redis instances (default port is 6379).
- ClusterControl will configure the Redis instance with `appendonly` enabled.
- ClusterControl will secure the instance with authentication enabled and configure the `requirepass` and `masterauth` options.
- ClusterControl will enable TLS encryption. To access the database nodes, one must use the `--tls` flag to connect.
- The configuration `maxmemory` (70% of the node's RAM, rounded to the nearest power of 2) and `maxmemory-policy=allkeys-lru` will be set to reduce the risk of Redis being killed by the OOM killer (see the quick check after this list).
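A quick way to confirm these settings on an imported node with redis-cli; the password is a placeholder, and with TLS enabled you may also need to point `--cacert` at the CA file:

```bash
# Expected values are the ClusterControl defaults described above.
redis-cli --tls -a 'AdminPassword' CONFIG GET appendonly         # "yes"
redis-cli --tls -a 'AdminPassword' CONFIG GET maxmemory          # ~70% of RAM
redis-cli --tls -a 'AdminPassword' CONFIG GET maxmemory-policy   # "allkeys-lru"
```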
Tips
Redis Sentinel requires three nodes for automatic primary promotion. For a two-node Redis replication cluster, a Sentinel is co-located on each database instance and a third Sentinel can be co-located on the ClusterControl server.
Deployment steps
-
To import an existing Redis Sentinel cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "Redis Sentinel".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. The name of your cluster. Once the import is done, ClusterControl will use this as the cluster's registry name.
- Tags: Add tags to search for or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. A relative path is not supported. The SSH private key must be kept secure and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Redis port: The Redis server port of your existing instances (default is 6379). ClusterControl assumes that all database nodes are using the same port.
- Redis Sentinel port: The Sentinel port of your existing instances (default is 26379). ClusterControl assumes that all Sentinel nodes are using the same port.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special repository configuration from the vendor (commonly for enterprise databases); ClusterControl will then skip the repository configuration step.
- Password: Under the Authentication pane, this is the primary node password used for client/replica authentication.
- Replication user: Under the Authentication pane, this is the replication user of your Redis replication cluster.
- Replication password: Under the Authentication pane, this is the replication password of your Redis replication cluster.
- Sentinel password: The password for your Sentinel user `sentinel-user`.
- Use existing certificates: Specify the cmon certificates directory to use for the imported cluster. If the SSL configuration is not completed, scale-up operations may fail.
- Cluster auto-recovery: Found under Advanced configuration. When on, the cluster is imported with cluster recovery enabled; when off, cluster recovery stays disabled.
- Node auto-recovery: Found under Advanced configuration. When on, the cluster is imported with node recovery enabled; when off, node recovery stays disabled.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, add your database nodes:
- Primary node: Only specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
- Replica nodes: Only specify the IP address or hostname of your replica node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
If you have more than one replica node, add each of them under the Replica nodes field.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
To import a Redis Sentinel cluster consisting of three nodes using `ubuntu` as the OS user, `sjl2l5wi` as the Sentinel password, and an `ed25519` key, waiting for the job and printing its logs:

```bash
s9s cluster --register --cluster-type=redis \
    --nodes="redis://192.168.40.190:6379;redis://192.168.40.191:6379;redis://192.168.40.192:6379;redis-sentinel://192.168.40.190:26379;redis-sentinel://192.168.40.191:26379;redis-sentinel://192.168.40.192:26379" \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_ed25519 \
    --sentinel-passwd sjl2l5wi \
    --vendor=redis --provider-version=8 \
    --cluster-name="My Redis Sentinel Cluster" \
    --log --wait
```
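As an optional sanity check after the job completes, you can ask one of the Sentinels which primaries it monitors; the host and password below mirror the example above, and `--tls` may be needed if your Sentinels are TLS-enabled:

```bash
# Lists the monitored primaries as seen by this Sentinel.
redis-cli -h 192.168.40.190 -p 26379 -a 'sjl2l5wi' SENTINEL masters
```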
Redis Cluster
Import existing Redis Cluster instances for v7 and v8. When importing existing Redis instances, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (primary, replica).
Attention
ClusterControl does not support importing Redis instances without authentication. Authentication must be enabled, and the provided admin username and password must have the `~* +@all` (all keys, all commands) privilege.
The following vendors and versions are supported for importing an existing cluster:
- Redis Cluster - 7 and 8
- Canonical - 7
Default configuration
By default, ClusterControl deploys Redis cluster instances with the following configurations:
- ClusterControl will configure the Redis instances with `appendonly` enabled.
- ClusterControl will secure the instance with AUTH enabled and configure the `requirepass` and `masterauth` options.
- ClusterControl will enable TLS encryption. To access the cluster, one must use the `--tls` flag to connect.
- The configuration `maxmemory` (70% of the node's RAM, rounded to the nearest power of 2) and `maxmemory-policy=allkeys-lru` will be set to reduce the risk of Redis being killed by the OOM killer (see the check after this list).
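One way to verify the health and slot coverage of the imported cluster is redis-cli's built-in cluster check; the address and password are placeholders:

```bash
# Checks slot coverage and node agreement across the whole cluster.
redis-cli --tls -a 'AdminPassword' --cluster check 192.168.40.188:6379
```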
Deployment steps
-
To import an existing Redis Cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "Redis Cluster".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. The name of your cluster. Once the import is done, ClusterControl will use this as the cluster's registry name.
- Tags: Add tags to search for or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. A relative path is not supported. The SSH private key must be kept secure and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Server port: The database server port that ClusterControl shall configure on all database nodes.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special repository configuration from the vendor (commonly for enterprise databases); ClusterControl will then skip the repository configuration step.
- Username: The database admin username that ClusterControl shall configure on all database nodes. This user will be granted the global SUPERUSER role and allowed to connect from localhost only.
- Password: The password for the admin user above.
- Cluster auto-recovery: Found under Advanced configuration. When on, the cluster is imported with cluster recovery enabled; when off, cluster recovery stays disabled.
- Node auto-recovery: Found under Advanced configuration. When on, the cluster is imported with node recovery enabled; when off, node recovery stays disabled.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, specify the target database node that you want to import:
- Cluster node: Only specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
Specify one node and ClusterControl will discover the rest of the members.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
-
To import a Redis Sharded Cluster, you only need to specify the primary node. For example, if the primary runs on 192.168.40.188 and a replica runs on 192.168.40.189, there is no need to specify the replica; it will be auto-detected and imported by ClusterControl as well. The following command demonstrates this using `ubuntu` as the OS user and an `ed25519` key, waiting for the job and printing its logs:
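A sketch modeled on the Redis Sentinel example above; the `--cluster-type` value for a sharded cluster is an assumption, so verify it against your s9s version:

```bash
# Hedged sketch: --cluster-type=redis_sharded is an assumption; only the
# primary needs to be listed, replicas are discovered automatically.
s9s cluster --register --cluster-type=redis_sharded \
    --nodes="redis://192.168.40.188:6379" \
    --os-user=ubuntu \
    --os-key-file=/home/ubuntu/.ssh/id_ed25519 \
    --vendor=redis --provider-version=8 \
    --cluster-name="My Redis Sharded Cluster" \
    --log --wait
```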
Microsoft SQL Server
Import existing Microsoft SQL Server – 2019 and 2022 for Linux. When importing existing SQL Server instances, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (primary, replica).
The following vendors and versions are supported for importing an existing cluster:
- Microsoft SQL Server for Linux - 2019 and 2022
Attention
A Microsoft SQL Server database node must have at least 2000 MB of memory. ClusterControl will abort the deployment job if this requirement is not met.
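A quick pre-import check you can run on each database node:

```bash
# Prints total memory in MB; the job aborts below 2000 MB.
free -m | awk '/^Mem:/ {print $2 " MB total"}'
```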
Default configuration
By default, ClusterControl deploys SQL Server with the following configurations:
- All database nodes must use a fully qualified domain name (FQDN). IP addresses are not supported.
- Minimum available RAM is 1800 MB per database node.
- Enforces the SQL Server user's password policy as shown here.
- At the moment, ClusterControl only deploys AlwaysOn with asynchronous-commit mode, where it does not wait for any secondary replica to write incoming transaction log records to disk (see the query sketch after this list).
- ClusterControl will set up the SQL Server using the Evaluation edition (free, 180-day limit).
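To confirm the commit mode of an existing availability group before importing, you can query the replica catalog view; the credentials are placeholders, and `-C` (trust server certificate) may not exist on older sqlcmd versions:

```bash
# Shows each replica and whether it runs in ASYNCHRONOUS_COMMIT mode.
sqlcmd -S localhost -U sa -P 'YourPassword' -C \
  -Q "SELECT replica_server_name, availability_mode_desc FROM sys.availability_replicas;"
```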
Deployment steps
-
To import an existing SQL Server cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "SQL Server".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. The name of your cluster. Once the import is done, ClusterControl will use this as the cluster's registry name.
- Tags: Add tags to search for or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. A relative path is not supported. The SSH private key must be kept secure and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- Admin Username: The username of your admin user.
- Admin Password: The password for the admin user.
- Server port: This field is disabled; it serves as a reference for the port that will be used during the import.
- Cluster auto-recovery: Found under Advanced configuration. When on, the cluster is imported with cluster recovery enabled; when off, cluster recovery stays disabled.
- Node auto-recovery: Found under Advanced configuration. When on, the cluster is imported with node recovery enabled; when off, node recovery stays disabled.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, specify the target database nodes that you want to import:
- Primary node: Specify the IP address or hostname of the primary database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic. Only one primary node is allowed for single-primary replication.
- Replica nodes: Specify the IP address or hostname of the replica database node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic. You can specify zero or more replica nodes.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
If you have more than one replica node, add each of them under the Replica nodes field.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
- Importing via the s9s CLI is currently not implemented for SQL Server clusters.
Elasticsearch
Import existing Elasticsearch Cluster instances for 7.x and 8.x. When importing existing Elasticsearch instances, ClusterControl will perform the following:
- Verify SSH connectivity to all nodes.
- Detect the host environment and operating system.
- Discover the database role of each node (master, data).
Attention
ClusterControl does not support importing Elasticsearch instances without authentication. Authentication must be enabled, and a sysadmin username and password must be provided.
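You can verify the credentials against the cluster's HTTP API before importing; the host is a placeholder and `-k` skips CA verification for self-signed certificates:

```bash
# A green or yellow status with valid credentials confirms reachability.
curl -k -u admin:'YourPassword' "https://192.168.40.150:9200/_cluster/health?pretty"
```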
The following vendor and versions are supported for importing an existing cluster:
- Elastic - 7.1, 8.1 and 8.3
Default configuration
By default, ClusterControl deploys Elasticsearch with the following configurations:
- For high-availability cluster setup, ClusterControl will configure an NFS server on one of the Elasticsearch nodes and mount the shared filesystem on all data nodes. This is for snapshot backup and restoration.
- For single-node cluster setup, ClusterControl will create a local path for snapshot backup and restoration.
Attention
The minimum memory requirement for an Elasticsearch master node, data node, or master-data node is 1576 MB. ClusterControl will abort the deployment job if this requirement is not met. See Hardware Requirement for Elasticsearch.
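To check each node's role and memory headroom against this requirement before importing, the `_cat/nodes` API is handy; the host and credentials are placeholders:

```bash
# Lists node names, roles, JVM heap limit, and total RAM per node.
curl -k -u admin:'YourPassword' \
  "https://192.168.40.150:9200/_cat/nodes?v=true&h=name,node.role,heap.max,ram.max"
```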
Deployment steps
-
To import an existing Elasticsearch cluster, go to ClusterControl GUI → Deploy a cluster → Import a database cluster and under the Database dropdown, choose "Elasticsearch".
-
Under Cluster details, specify the cluster details you want to assign:
- Name: This is optional. The name of your cluster. Once the import is done, ClusterControl will use this as the cluster's registry name.
- Tags: Add tags to search for or group your database clusters.
-
Click Continue.
-
Under the SSH configuration section, specify the SSH credentials that ClusterControl should use to connect to the database nodes:
- SSH user: The SSH user that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH user.
- SSH user key path: The SSH private key path that ClusterControl will use to perform SSH authentication to the database node. A relative path is not supported. The SSH private key must be kept secure and must exist on the ClusterControl node.
- SSH port: The SSH port that ClusterControl will use to perform SSH to the database node. ClusterControl assumes that all database nodes are using the same SSH port.
- SSH sudo password: The sudo password if the SSH user requires a password for privilege escalation.
-
Click Continue to proceed to the next step.
-
Under the Node configuration section, specify the database credentials and configurations that ClusterControl shall use when deploying the cluster:
- HTTP port: This field is disabled; it serves as a reference for the port that will be used during the import.
- Certificates password: If specified, the password for a pre-existing CA private key.
- Admin Username: Enter the sysadmin username you use in your Elasticsearch cluster.
- Admin Password: Enter the sysadmin password you use in your Elasticsearch cluster.
- Repository: Choosing "Use vendor repositories" (default) lets ClusterControl provision software by setting up and using the database vendor's preferred software repository; ClusterControl will always install the latest version available at that moment. Choose "Do not setup vendor repositories" if you have a special repository configuration from the vendor (commonly for enterprise databases); ClusterControl will then skip the repository configuration step.
- Use existing certificates: Specify the cmon certificates directory to use for the imported cluster. If empty, ClusterControl will search the configuration folder `/etc/elasticsearch` or the default generation folder `/usr/share/elasticsearch` on the node for certificate files (such as elastic-certificates.p12, elastic-stack-ca.p12, and elasticsearch-ssl-http.zip) to configure certificates on new nodes in the cluster. If the SSL configuration is not completed, scale-up operations may fail.
- Cluster auto-recovery: Found under Advanced configuration. When on, the cluster is imported with cluster recovery enabled; when off, cluster recovery stays disabled.
- Node auto-recovery: Found under Advanced configuration. When on, the cluster is imported with node recovery enabled; when off, node recovery stays disabled.
-
Click Continue to proceed to the next step.
-
Under the Add nodes section, specify the target database node that you want to import:
- Cluster node: Specify the IP address or hostname of the primary or master-data node. Press Enter to add the node, where ClusterControl will perform a pre-deployment check to verify if the node is reachable via SSH key-based authentication. If the target node has more than one network interface, you will be able to select or enter a separate IP address to be used only for database traffic.
Note
You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
Note
Specify one node and ClusterControl will discover the rest of the members. The specified node will be used to search for certificate files.
-
Click Continue to proceed to the Preview page. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the cluster by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
-
ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
- Importing via the s9s CLI is currently not implemented for Elasticsearch clusters.