Online Installation
There are several ways to get ClusterControl installed and running in your infrastructure:
- Installer script (install-cc)
- ClusterControl Helm Charts
- Ansible Role
- Puppet Module
Installer script (install-cc)
The installer script is the recommended way to install ClusterControl and the easiest way to get it up and running on a supported operating system. See Quickstart for an example.
The script must be downloaded and executed on the ClusterControl node; it performs all necessary steps to install and configure ClusterControl's packages and dependencies on that particular host. It also supports offline installation when the NO_INET=1 variable is exported; however, you need to have a mirrored repository enabled, or MySQL/MariaDB and Apache must be installed and running on that host beforehand. See Offline Installation for details. Otherwise, the script assumes that the host can install all dependencies via the operating system's repositories.
Note
Starting from ClusterControl 1.9.7 (September 2023), ClusterControl GUI v2 is the default frontend graphical user interface (GUI) for ClusterControl. For new installations, the installer script will only install GUI v2, skipping GUI v1. If you would like to run GUI v1, see Legacy GUI Installation. If you are upgrading from an older version, the GUI v1 will remain available and usable as usual.
We encourage users to go to the ClusterControl download page and download the installer script from there (user registration required). Once registered, you will see the installation instructions similar to what is described in this section.
On the ClusterControl server, run the following commands:
wget https://severalnines.com/downloads/cmon/install-cc
chmod +x install-cc
sudo ./install-cc # omit sudo if you run as root
Info
The installation script will attempt to automate the following tasks:
- Install and configure a MySQL/MariaDB server (used by ClusterControl to store management and monitoring data).
- Install and configure the ClusterControl packages via the package manager.
- Install ClusterControl dependencies via the package manager.
- Configure Apache and SSL.
- Configure SELinux (Red Hat-based distributions only).
- Configure ClusterControl RPC token.
- Configure ClusterControl Controller (CMON) with minimal configuration options.
- Enable the CMON service on boot and start it up.
After the installation completes, open your web browser to https://{ClusterControl_host}/ and create the default admin user by specifying a username (the username "admin" is reserved) and a password on the welcome page.
Environment variables
The installer script also understands a number of environment variables if defined. Supported environment variables are:
Variable | Description
---|---
S9S_CMON_PASSWORD | MySQL cmon user password.
S9S_ROOT_PASSWORD | MySQL root user password of the node.
S9S_DB_PORT | MySQL port for cmon to connect to.
HOST | Primary IP address or FQDN of the host. Useful if the host has multiple IP addresses.
NO_INET | Special flag to indicate an offline installation. A mirrored repository must be enabled, or MySQL/MariaDB and Apache must be installed and running on the host beforehand.
INNODB_BUFFER_POOL_SIZE | MySQL InnoDB buffer pool size to be configured on the host. Defaults to 50% of the host's RAM.
CLUSTERCONTROL_BUILD | ClusterControl build packages (other than the controller). Separate each package with a space.
CONTROLLER_BUILD | ClusterControl controller build.
S9S_TOOLS_BUILD | ClusterControl CLI (a.k.a. s9s) build.
The environment variables can be set through the export command or by prefixing the install command, as shown in the Example use cases section.
Example use cases
If you have multiple network interface cards, you may immediately assign a primary IP address using the HOST variable, as per the example below:
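HOST=192.168.1.12 ./install-cc # the address is illustrative; use the primary IP address or FQDN of this host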
By default, the script will allocate 50% of the host's RAM to the InnoDB buffer pool. You may change this by assigning a value in MB to the INNODB_BUFFER_POOL_SIZE variable, as per the example below:
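INNODB_BUFFER_POOL_SIZE=8192 ./install-cc # value in MB (8 GB here); adjust to your host's RAM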
If you want to perform a single-liner non-interactive installation, you can assign each mandatory variable with its value beforehand, similar to the example below:
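S9S_CMON_PASSWORD=cmonP455 S9S_ROOT_PASSWORD=r00tP455 S9S_DB_PORT=3306 HOST=192.168.1.12 ./install-cc # all values are illustrative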
If you want to install a specific version instead of the latest in the repository, you can use the CLUSTERCONTROL_BUILD, CONTROLLER_BUILD, and S9S_TOOLS_BUILD environment variables. You can get the available package names and versions from the ClusterControl download site. Examples are as follows:
CLUSTERCONTROL_BUILD="clustercontrol-1.7.1-5622 clustercontrol-cloud-1.7.1-163 clustercontrol-clud-1.7.1-163 clustercontrol-notifications-1.7.1-159 clustercontrol-ssh-1.7.1-70" \
CONTROLLER_BUILD="clustercontrol-controller-1.7.1-2985" \
S9S_TOOLS_BUILD="s9s-tools-1.7.20190117-release1" \
./install-cc
Helm Chart
This Helm chart is designed to provide everything you need to get ClusterControl running in a vanilla Kubernetes cluster. This includes dependencies such as:
- Ingress-Nginx Controller
- MySQL Operator
- VictoriaMetrics Operator
If you do not wish to install any of those, see Helm chart dependencies.
Installation using Helm chart
- Add the Helm chart repository with the following commands:
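For example, assuming the Severalnines chart repository published on GitHub Pages:
helm repo add s9s https://severalnines.github.io/helm-charts/
helm repo update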
- Create a namespace for ClusterControl. The MySQL Operator is also required to run in a custom namespace (not default):
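For example, assuming a namespace named clustercontrol:
kubectl create namespace clustercontrol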
- Install ClusterControl using Helm:
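A minimal example, assuming the namespace created above and an FQDN of your choice:
helm install clustercontrol s9s/clustercontrol \
  --namespace clustercontrol \
  --set fqdn=clustercontrol.example.com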
The installation generates an example SSH key that you can use to access your database nodes. However, you can use your own SSH key as described below.
Providing your own SSH keys for ClusterControl to use
The ClusterControl Helm chart provides an example SSH key for you to use; however, you should provide your own SSH keys for ClusterControl to use when connecting to your target machines. These keys should already be configured in the target servers' authorized_keys.
- Create Kubernetes secrets with your SSH keys:
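For example, assuming a secret named cc-ssh-keys and a private key file on your workstation:
kubectl create secret generic cc-ssh-keys \
  --namespace clustercontrol \
  --from-file=key1=/path/to/your/private-key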
key1 is the filename of your SSH key in ClusterControl; it will be created under /root/.ssh-keys-user.
Note
You can use multiple --from-file arguments; be sure to provide unique key names: key1, key2, key3.
- Install or upgrade ClusterControl, providing the cmon.sshKeysSecretName value with the secret name created above:
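For example, reusing the cc-ssh-keys secret name assumed above:
helm upgrade --install clustercontrol s9s/clustercontrol \
  --namespace clustercontrol \
  --set fqdn=clustercontrol.example.com \
  --set cmon.sshKeysSecretName=cc-ssh-keys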
Custom configuration via values.yaml
To create your own override file, export it from Helm using the show command:
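helm show values s9s/clustercontrol > values.yaml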
Look at the generated values.yaml and customize your configuration there. To install or upgrade using your custom values.yaml, include the override file with the -f flag:
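helm upgrade --install clustercontrol s9s/clustercontrol -f values.yaml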
Note
- The ClusterControl CMON API is accessible within the cluster via cmon-master:9501.
- The ClusterControl GUI is accessible within the cluster via cmon-master:3000.
- We highly recommend an ingress, as the ClusterControl GUI requires the CMON API to be exposed and available externally.
Helm chart dependencies
Oracle MySQL Operator or NGINX ingress controller
If you already have Oracle MySQL Operator or NGINX ingress controller installed, you can set the following flags:
helm install clustercontrol s9s/clustercontrol --debug \
--set fqdn=clustercontrol.example.com \
--set installMysqlOperator=false \
--set ingressController.enabled=false
This Helm chart has dependencies that make ClusterControl easier to install. None of them is necessary if you provide your own equivalent or already have it installed:
- oracle-mysql-operator
  - The Oracle MySQL Operator is required for running the MySQL database within the Kubernetes cluster.
  - You can disable it by setting: installMysqlOperator: false
- oracle-mysql-innodbcluster
  - A MySQL InnoDB Cluster is required for ClusterControl.
  - If you disable it, you must provide a different MySQL/MariaDB for ClusterControl to use. Please refer to the official Helm chart documentation for MySQL InnoDB Cluster.
  - You can disable it by setting: createDatabases: false
- nginx-ingress-controller
  - An ingress controller to access ClusterControl.
  - For more information, please refer to the official Helm chart documentation for the NGINX Ingress Controller.
  - If you already have an ingress controller installed or wish to use a different one, you can disable it by setting: ingressController.enabled: false
VictoriaMetrics or other Prometheus-compatible monitoring
If you wish to use your own VictoriaMetrics or other Prometheus-compatible monitoring system, please refer to the victoria-metrics-single parameters. These defaults provide the minimum that ClusterControl metrics and dashboards need to work. Feel free to adjust as needed; however, keep in mind the required labels, annotations, and service discovery. If you already have your own VictoriaMetrics or Prometheus cluster and don't want to install this, you can disable it by setting the following inside values.yaml:
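A sketch of such an override, assuming the dependency is toggled by an enabled flag named after the subchart (the key shown is hypothetical; check the chart's default values.yaml for the exact name):
# hypothetical key name; verify against the chart's default values.yaml
victoria-metrics-operator:
  enabled: false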
Ansible Role
If you are automating your infrastructure using Ansible, we have created a role for this purpose, available on Ansible Galaxy. The role also supports deploying a new cluster and importing an existing cluster into ClusterControl automatically, as shown under Example playbooks.
Getting the role is as easy as:
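ansible-galaxy install severalnines.clustercontrol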
Usage
- Get the ClusterControl Ansible role from Ansible Galaxy or GitHub.
- Create a playbook. See the Example playbooks section.
- Run the playbook.
- Once ClusterControl is installed, go to https://{ClusterControl_host}/ and create the default admin user/password.
- On the ClusterControl node, set up SSH key-based authentication to all target DB nodes. For example, if the ClusterControl node is 192.168.0.10 and the DB nodes are 192.168.0.11, 192.168.0.12 and 192.168.0.13:
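For example, run the following on the ClusterControl node as the OS user that ClusterControl will use to connect:
ssh-keygen -t rsa # press Enter on all prompts to accept the defaults
ssh-copy-id 192.168.0.11
ssh-copy-id 192.168.0.12
ssh-copy-id 192.168.0.13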
- Start to deploy a new database cluster or add an existing one.
Example playbooks
The simplest playbook would be:
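For example, assuming the ClusterControl host is 192.168.10.15 as in the examples below:
- hosts: 192.168.10.15
  roles:
    - { role: severalnines.clustercontrol, tags: controller }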
If you would like to specify custom configuration values as explained above, create a file called vars/main.yml and include it inside the playbook:
- hosts: 192.168.10.15
  vars_files:
    - vars/main.yml
  roles:
    - { role: severalnines.clustercontrol, tags: controller }
Inside vars/main.yml:
controller: true
mysql_root_username: admin
mysql_root_password: super-user-password
cmon_mysql_password: super-cmon-password
cmon_mysql_port: 3307
If you are running as another user, ensure the user has the ability to escalate to superuser via sudo. Example playbook for Ubuntu 20.04 with a sudo password enabled:
- hosts: [email protected]
  become: yes
  become_user: root
  roles:
    - { role: severalnines.clustercontrol, tags: controller }
Then, execute the command with the --ask-become-pass flag, for example:
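ansible-playbook cc.playbook --ask-become-pass # assuming the playbook is saved as cc.playbook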
Puppet module
This module installs ClusterControl for your new database node/cluster deployment or on top of your existing database node/cluster. The Puppet module for ClusterControl automates the following actions based on the scope of this module:
- Set up the required ClusterControl repositories.
- Download and install dependencies required by its components; Apache, PHP, OpenSSL, and MySQL/MariaDB are among the basic packages required.
- Install ClusterControl components such as the controller, front-end, CMON cloud, CMON SSH, and CMON clud (the upload/download cloud link).
- Automate the Apache configuration:
  - Configure Apache for port 80 and port 443 (SSL/TLS), including the required rewrite rules.
  - Ensure port 443 is enabled.
  - Enable the headers module and set X-Frame-Options: sameorigin.
  - Check permissions for the ClusterControl UI and install SSL.
- Automate MySQL/MariaDB installation:
  - Create the CMON database, grant the cmon user, and configure the database for the ClusterControl UI.
Requirements
Make sure you meet the following criteria prior to the deployment:
- The ClusterControl node must run on a clean, dedicated host with an internet connection.
- If you are running as a non-root user, make sure the user is able to escalate to root with the sudo command.
- For SUSE (SLES) or openSUSE Linux, make sure you install the zypprepo module (check out Zypprepo here). You can do that by installing it on your Puppet master as follows:
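For example, assuming the zypprepo module published on Puppet Forge:
puppet module install puppet-zypprepo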
Installation using Puppet
Our ClusterControl module for Puppet is available on Puppet Forge, or from the Severalnines Puppet repository by cloning or downloading it as a zip. Then place it under the Puppet modulepath directory and make sure to name your module directory clustercontrol, for example /etc/puppetlabs/code/environments/production/modules/clustercontrol.
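For example, to install from Puppet Forge (assuming the module is published under the severalnines namespace):
puppet module install severalnines-clustercontrol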
- Generate an API token. To do this, go to $modulepath/clustercontrol and run the s9s_helper.sh script from the files directory, for example:
$ cd /etc/puppetlabs/code/environments/production/modules/clustercontrol
$ files/s9s_helper.sh --generate-token
efc6ac7fbea2da1b056b901541697ec7a9be6a77
Keep that token; you will use it as an input parameter for your manifest file.
- Let's say you have the following hosts:
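For example (only the ClusterControl node is shown here, matching the manifest below):
- clustercontrol.puppet.local (192.168.40.90), the target host where ClusterControl will be installed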
- Create a manifest file; let's say we name it clustercontrol.pp.
- Define the Puppet agent node where ClusterControl will be installed. In this example, the target host is clustercontrol.puppet.local with the IP address 192.168.40.90, and the ClusterControl definition is as follows:
node 'clustercontrol.puppet.local' {  # applies only to the mentioned node; if nothing is mentioned, applies to all
  class { 'clustercontrol':
    is_controller       => true,
    cc_hostname         => '192.168.40.90',
    mysql_cmon_password => 'R00tP@55',
    api_token           => 'efc6ac7fbea2da1b056b901541697ec7a9be6a77',
    ssh_user            => 'vagrant',
    ssh_user_group      => 'vagrant',
    only_cc_v2          => true
  }
}
- Run the following command on the target ClusterControl host, which is clustercontrol.puppet.local in this example installation:
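For example, assuming the node is already enrolled with your Puppet master:
puppet agent -t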
Once the deployment is complete, open the ClusterControl GUI at https://<ClusterControl_IP_address_or_hostname>/ and create the first admin user. You can then start adding an existing database node/cluster or deploying a new one. Ensure that SSH key-based authentication is configured properly from the ClusterControl node to all DB nodes beforehand.
This module only supports bootstrapping MySQL servers by IP address (it expects skip-name-resolve to be enabled on all MySQL nodes).
More info
- ClusterControl Puppet Module Github page.
- For more examples of deployments using Puppet, see Puppet Module for ClusterControl - Adding Management and Monitoring to your Existing Database Clusters.