Manual Installation
If you want to have more control over the installation process, you may perform a manual installation. ClusterControl requires a number of packages to be installed and configured, as described in the following list:
- `clustercontrol2` – ClusterControl v2 web user interface.
- `clustercontrol-controller` – ClusterControl CMON controller.
- `clustercontrol-notifications` – ClusterControl notification module, if you would like to integrate with third-party tools like PagerDuty and Slack.
- `clustercontrol-ssh` – ClusterControl web-based SSH module, if you would like to access the host via SSH directly from the ClusterControl UI.
- `clustercontrol-cloud` – ClusterControl cloud module, if you would like to manage your cloud instances directly from the ClusterControl UI.
- `clustercontrol-proxy` – ClusterControl controller proxying service for ClusterControl Operations Center.
- `clustercontrol-clud` – ClusterControl cloud file manager module, if you would like to upload and download backups from cloud storage. It requires `clustercontrol-cloud`.
- `s9s-tools` – ClusterControl CLI client, if you would like to manage your cluster using a command-line interface.
!!! note

    Installing and uninstalling ClusterControl should not bring any downtime to the managed database cluster.
Requirements
Make sure the following is ready prior to this installation:
- Verify that sudo is working properly if you are using a non-root user.
- ClusterControl node must be able to access all database nodes via passwordless SSH.
- You must have an internet connection on the ClusterControl node during the installation process. Otherwise, see Offline Installation.
Installation Steps
Steps described in the following sections should be performed on the ClusterControl node unless specified otherwise. The first set of steps applies to RHEL-based systems (RHEL, CentOS, Rocky Linux, AlmaLinux); the second set, further below, applies to Debian-based systems (Ubuntu, Debian).

RHEL/CentOS/Rocky Linux/AlmaLinux
1. Set up the ClusterControl repository:
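    The repository definition is not shown in this extract; as a minimal sketch, the Severalnines YUM repositories can typically be configured as follows (the repository file names and URLs are assumptions to verify for your ClusterControl version):

    ```bash
    # Import the Severalnines package signing key
    $ rpm --import https://repo.severalnines.com/severalnines-repos.asc
    # Add the s9s-tools and ClusterControl v2 repository definitions
    $ wget https://severalnines.com/downloads/cmon/s9s-repo.repo -P /etc/yum.repos.d/
    $ wget https://severalnines.com/downloads/cmon/s9s-ccv2.repo -P /etc/yum.repos.d/
    ```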
2. Disable SELinux and open the required ports (or stop iptables):
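    For example (the port list below is an assumption based on ClusterControl's default service ports; adjust it to your setup):

    ```bash
    # Disable SELinux immediately and on subsequent boots
    $ setenforce 0
    $ sed -i 's|^SELINUX=.*|SELINUX=disabled|g' /etc/selinux/config

    # Open the web UI and CMON service ports
    $ firewall-cmd --permanent --add-port={443,9443,9500,9501,9510,9511,9518}/tcp
    $ firewall-cmd --reload
    ```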
3. Install the required packages via the package manager:

    ```bash
    # RHEL/CentOS/Rocky/Alma 9
    $ yum -y install wget dmidecode hostname python3 mariadb mariadb-server httpd mod_ssl
    $ alternatives --set python /usr/bin/python3

    # RHEL/CentOS 8
    $ yum -y install wget dmidecode hostname python36 mariadb mariadb-server httpd mod_ssl
    $ alternatives --set python /usr/bin/python3

    # RHEL/CentOS 7
    $ yum -y install wget dmidecode python jq mariadb mariadb-server httpd mod_ssl

    # RHEL/CentOS 6
    $ yum -y install wget dmidecode python jq mariadb mariadb-server httpd mod_ssl
    ```
4. Install the EPEL packages:
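    For example:

    ```bash
    $ yum -y install epel-release
    ```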
5. Install the ClusterControl packages:
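    A sketch based on the package list at the top of this page; install only the optional modules you need:

    ```bash
    $ yum -y install clustercontrol2 clustercontrol-controller clustercontrol-notifications clustercontrol-ssh clustercontrol-cloud clustercontrol-clud clustercontrol-proxy
    ```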
6. Start the MySQL server (MariaDB on RHEL/CentOS 7 and 8), enable it on boot, and set a MySQL root password:
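    For example, on a systemd-based host (`{root_password}` is a placeholder):

    ```bash
    $ systemctl enable --now mariadb
    $ mysqladmin -uroot password '{root_password}'
    ```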
7. Create the `cmon` user and grant it the required privileges:

    ```bash
    $ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"localhost" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    $ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"127.0.0.1" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    $ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"{controller_ip_address}" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    $ mysql -uroot -p -e 'FLUSH PRIVILEGES'
    ```

    Replace `{cmonpassword}` with the respective value and `{controller_ip_address}` with a valid FQDN or IP address of the ClusterControl node.
8. Generate a ClusterControl key to be used by `RPC_TOKEN`:
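    One way to generate a random token is with OpenSSL (a sketch; keep the generated value for the `cmon --init` step below):

    ```bash
    $ RPC_TOKEN=$(openssl rand -hex 32)
    $ echo $RPC_TOKEN
    ```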
9. Initialize the `cmon` service by running the command below:

    ```bash
    $ cmon --init \
        --mysql-hostname="127.0.0.1" \
        --mysql-port="3306" \
        --mysql-username="cmon" \
        --mysql-password="{cmonpassword}" \
        --mysql-database="cmon" \
        --hostname="{ClusterControl Primary IP Address}" \
        --rpc-token="{ClusterControl API key as generated above}" \
        --controller-id="clustercontrol"
    ```

    !!! warning "Attention"

        The value of `hostname` must be either a valid FQDN or IP address of the ClusterControl node. If the host has multiple IP addresses, pick the primary IP address of the host. The `cmon` user password is the one set in the previous step when creating the user.
10. The ClusterControl event and cloud modules require their service definitions inside `/etc/default/cmon`. Create the file and add the following lines:
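    The endpoint definitions below are a sketch of the commonly used defaults (port 9510 for the events service and 9518 for the cloud service); verify them against your ClusterControl version:

    ```bash
    EVENTS_CLIENT="http://127.0.0.1:9510"
    CLOUD_SERVICE="http://127.0.0.1:9518"
    ```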
11. Create a temporary directory for the SSL key and certificate:
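    For example; the `v3.ext` extension file is referenced by the signing command in the next step, and its content here is an assumption using standard X.509 v3 extensions:

    ```bash
    $ mkdir -p /tmp/ssl

    # Extension file used when signing the certificate (content is an assumption)
    $ cat > /tmp/ssl/v3.ext << EOF
    authorityKeyIdentifier=keyid,issuer
    basicConstraints=CA:FALSE
    keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
    subjectAltName = DNS:dev.severalnines.local
    EOF
    ```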
12. Generate the self-signed certificate:

    ```bash
    $ openssl genrsa -out /tmp/ssl/server.key 2048
    $ openssl req -new -key /tmp/ssl/server.key -out /tmp/ssl/server.csr \
        -addext "subjectAltName = DNS:dev.severalnines.local" \
        -subj "/C=SE/ST=Stockholm/L=Stockholm/O='Severalnines AB'/OU=Severalnines/CN=*.severalnines.local/[email protected]"
    $ openssl x509 -req -extfile /tmp/ssl/v3.ext -days 1825 -sha256 -in /tmp/ssl/server.csr -signkey /tmp/ssl/server.key -out /tmp/ssl/server.crt
    ```
13. Copy the certificate into the default ClusterControl directory:
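    A sketch assuming the common RHEL TLS locations; the destination paths are assumptions and must match the paths referenced in `cc-frontend.conf`:

    ```bash
    $ cp /tmp/ssl/server.crt /etc/pki/tls/certs/server.crt
    $ cp /tmp/ssl/server.key /etc/pki/tls/private/server.key
    ```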
14. Configure the header origin in `/etc/httpd/conf.d/security.conf`:
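    The exact directive is not shown in this extract; as an assumption, the intent is to allow the ClusterControl web UI origin, for example:

    ```bash
    # Directive name and value are assumptions; adjust to your deployment
    $ cat >> /etc/httpd/conf.d/security.conf << EOF
    Header set Access-Control-Allow-Origin "https://{controller_ip_address}"
    EOF
    ```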
15. Copy the Apache configuration file `cc-frontend.conf` from `/usr/share/cmon/apache/` to `/etc/httpd/conf.d/cc-frontend.conf`:
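    For example:

    ```bash
    $ cp /usr/share/cmon/apache/cc-frontend.conf /etc/httpd/conf.d/cc-frontend.conf
    ```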
16. Replace the default URL with the correct `hostname`:

    ```bash
    $ sed -i "s|https://cc2.severalnines.local:9443.*|https://{controller_ip_address}\/|g" /etc/httpd/conf.d/cc-frontend.conf
    $ sed -i "s|Listen 9443|#Listen 443|g" /etc/httpd/conf.d/cc-frontend.conf
    $ sed -i "s|9443|443|g" /etc/httpd/conf.d/cc-frontend.conf
    ```

    Replace `{controller_ip_address}` with a valid FQDN or IP address of the ClusterControl node.
17. Enable the ClusterControl and Apache daemons on boot and start them:
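    A sketch; the service names below correspond to the packages installed earlier and are assumptions to adjust if you skipped optional modules:

    ```bash
    $ systemctl enable --now httpd cmon cmon-ssh cmon-events cmon-cloud
    ```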
18. Configure the `s9s-tools` repository (skip this step if you already set up the ClusterControl repository in step 1):

    ```bash
    $ cat > /etc/yum.repos.d/s9s-tools.repo << EOF
    [s9s-tools]
    name=s9s-tools (CentOS_8)
    type=rpm-md
    baseurl=https://repo.severalnines.com/s9s-tools/{os_codename}
    gpgcheck=1
    gpgkey=https://repo.severalnines.com/s9s-tools/{os_codename}/repodata/repomd.xml.key
    enabled=1
    EOF
    ```

    Replace `{os_codename}` with the operating system code from the following list: `RHEL_7`, `RHEL_8`, `RHEL_9`, `CentOS_7`, `CentOS_8`, `CentOS_9`.
19. Install the ClusterControl CLI (skip this step if `s9s-tools` is already installed):
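    For example:

    ```bash
    $ yum -y install s9s-tools
    ```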
20. Create the `ccsetup` user, which is used to register a new account:

    ```bash
    $ export S9S_USER_CONFIG=/tmp/ccsetup.conf
    $ s9s user --create --new-password=admin --group=admins --email-address="{your_email_address}" --controller="https://localhost:9501" ccsetup
    $ unset S9S_USER_CONFIG
    ```

    The email `{your_email_address}` is used to register the new account.
21. Generate an SSH key to be used by ClusterControl when connecting to all managed hosts. In this example, we are using the root user to connect to the managed hosts. To generate an SSH key for the root user, do:
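    A typical invocation, generating a passphrase-less RSA key for the root user:

    ```bash
    $ ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ''
    ```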
    !!! note

        If you are running as a sudoer, the default SSH key will be located under `/home/$USER/.ssh/id_rsa`. See Operating System User.
22. Before creating or importing a database server/cluster into ClusterControl, set up passwordless SSH from the ClusterControl host to the database host(s). Use the following command to copy the SSH key to the target hosts:
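    For example, using `ssh-copy-id`:

    ```bash
    $ ssh-copy-id -i ~/.ssh/id_rsa {SSH user}@{IP address of the target node}
    ```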
    Replace `{SSH user}` and `{IP address of the target node}` with the appropriate values. Repeat the command for all target hosts.
23. Open the ClusterControl UI at `https://{ClusterControl_host}/` and create the default admin password by providing a valid email address and password. You will be redirected to the ClusterControl default page.
Ubuntu/Debian

The following steps should be performed on the ClusterControl node unless specified otherwise. Omit `sudo` if you are installing as the root user.
1. Set up the ClusterControl repository:
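    A minimal sketch of the Severalnines APT repository setup (the key URL and repository line are assumptions to verify for your ClusterControl version):

    ```bash
    $ wget -qO - https://repo.severalnines.com/severalnines-repos.asc | apt-key add -
    $ echo "deb https://repo.severalnines.com/deb ubuntu main" | tee /etc/apt/sources.list.d/s9s-repo.list
    $ apt-get update
    ```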
2. If you have AppArmor running, disable it and open the required ports (or stop iptables):
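    For example (the port list is an assumption based on ClusterControl's default service ports; adjust it to your setup):

    ```bash
    $ systemctl stop apparmor
    $ systemctl disable apparmor

    # Open the web UI and CMON service ports
    $ for port in 443 9443 9500 9501 9510 9511 9518; do ufw allow ${port}/tcp; done
    ```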
3. Install the ClusterControl dependencies:
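    A sketch mirroring the dependency list from the RHEL section (the Debian package names are assumptions):

    ```bash
    $ apt-get update
    $ apt-get -y install wget dmidecode hostname python3 jq mysql-server mysql-client apache2
    ```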
4. Install the ClusterControl controller package:
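    A sketch based on the package list at the top of this page; install only the optional modules you need alongside the controller:

    ```bash
    $ apt-get -y install clustercontrol2 clustercontrol-controller clustercontrol-notifications clustercontrol-ssh clustercontrol-cloud clustercontrol-clud clustercontrol-proxy
    ```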
5. Start the MySQL server, enable it on boot, and set a MySQL root password:
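    For example (`{root_password}` is a placeholder; the service name may be `mariadb` depending on the server you installed):

    ```bash
    $ systemctl enable --now mysql
    $ mysqladmin -uroot password '{root_password}'
    ```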
6. Create the `cmon` user and grant it the required privileges:

    ```bash
    $ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"localhost" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    $ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"127.0.0.1" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    $ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"{controller_ip_address}" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
    $ mysql -uroot -p -e 'FLUSH PRIVILEGES'
    ```

    Replace `{cmonpassword}` with the respective value and `{controller_ip_address}` with a valid FQDN or IP address of the ClusterControl node.
7. Generate a ClusterControl key to be used by `RPC_TOKEN`:
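    One way to generate a random token is with OpenSSL (a sketch; keep the generated value for the `cmon --init` step below):

    ```bash
    $ RPC_TOKEN=$(openssl rand -hex 32)
    $ echo $RPC_TOKEN
    ```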
8. Initialize the `cmon` service by running the command below:

    ```bash
    $ cmon --init \
        --mysql-hostname="127.0.0.1" \
        --mysql-port="3306" \
        --mysql-username="cmon" \
        --mysql-password="{cmonpassword}" \
        --mysql-database="cmon" \
        --hostname="{ClusterControl Primary IP Address}" \
        --rpc-token="{ClusterControl API key as generated above}" \
        --controller-id="clustercontrol"
    ```

    !!! warning "Attention"

        The value of `hostname` must be either a valid FQDN or IP address of the ClusterControl node. If the host has multiple IP addresses, pick the primary IP address of the host. The `cmon` user password is the one set in the previous step when creating the user.
9. ClusterControl's event and cloud modules require their service definitions inside `/etc/default/cmon`. Create the file and add the following lines:
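    The endpoint definitions below are a sketch of the commonly used defaults (port 9510 for the events service and 9518 for the cloud service); verify them against your ClusterControl version:

    ```bash
    EVENTS_CLIENT="http://127.0.0.1:9510"
    CLOUD_SERVICE="http://127.0.0.1:9518"
    ```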
10. Create a temporary directory for the SSL key and certificate:
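    For example; the `v3.ext` extension file is referenced by the signing command in the next step, and its content here is an assumption using standard X.509 v3 extensions:

    ```bash
    $ mkdir -p /tmp/ssl

    # Extension file used when signing the certificate (content is an assumption)
    $ cat > /tmp/ssl/v3.ext << EOF
    authorityKeyIdentifier=keyid,issuer
    basicConstraints=CA:FALSE
    keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
    subjectAltName = DNS:dev.severalnines.local
    EOF
    ```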
11. Generate the self-signed certificate:

    ```bash
    $ openssl genrsa -out /tmp/ssl/server.key 2048
    $ openssl req -new -key /tmp/ssl/server.key -out /tmp/ssl/server.csr \
        -addext "subjectAltName = DNS:dev.severalnines.local" \
        -subj "/C=SE/ST=Stockholm/L=Stockholm/O='Severalnines AB'/OU=Severalnines/CN=*.severalnines.local/[email protected]"
    $ openssl x509 -req -extfile /tmp/ssl/v3.ext -days 1825 -sha256 -in /tmp/ssl/server.csr -signkey /tmp/ssl/server.key -out /tmp/ssl/server.crt
    ```
12. Copy the certificate into the default ClusterControl directory:
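    A sketch assuming the common Debian TLS locations; the destination paths are assumptions and must match the paths referenced in `cc-frontend.conf`:

    ```bash
    $ cp /tmp/ssl/server.crt /etc/ssl/certs/server.crt
    $ cp /tmp/ssl/server.key /etc/ssl/private/server.key
    ```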
13. Configure the header origin in `/etc/apache2/conf-available/security.conf`:
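    The exact directive is not shown in this extract; as an assumption, the intent is to allow the ClusterControl web UI origin, for example:

    ```bash
    # Directive name and value are assumptions; adjust to your deployment
    $ cat >> /etc/apache2/conf-available/security.conf << EOF
    Header set Access-Control-Allow-Origin "https://{controller_ip_address}"
    EOF
    ```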
14. Replace the default URL with the correct `hostname`:

    ```bash
    $ sed -i "s|^[ \t]*ServerName.*|	ServerName {controller_ip_address}|g" /etc/apache2/sites-available/cc-frontend.conf
    $ sed -i "s|https://cc2.severalnines.local:9443.*|https://{controller_ip_address}\/|g" /etc/apache2/sites-available/cc-frontend.conf
    $ sed -i "s|Listen 9443|#Listen 443|g" /etc/apache2/sites-available/cc-frontend.conf
    $ sed -i "s|9443|443|g" /etc/apache2/sites-available/cc-frontend.conf
    ```

    Replace `{controller_ip_address}` with a valid FQDN or IP address of the ClusterControl node.
15. Enable the required Apache modules and create a symlink in `sites-enabled` for the default HTTPS virtual host:
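    A sketch; the exact module list is an assumption (`ssl`, `headers`, and `rewrite` are commonly required, and the proxy modules support the web-SSH and controller proxying features):

    ```bash
    $ a2enmod ssl headers rewrite proxy proxy_http proxy_wstunnel
    $ ln -sfn /etc/apache2/sites-available/cc-frontend.conf /etc/apache2/sites-enabled/cc-frontend.conf
    ```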
16. Restart the Apache web server to apply the changes:
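    For example:

    ```bash
    $ systemctl restart apache2
    ```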
17. Enable the ClusterControl services on boot and start them:
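    A sketch; the service names below correspond to the packages installed earlier and are assumptions to adjust if you skipped optional modules:

    ```bash
    $ systemctl enable --now cmon cmon-ssh cmon-events cmon-cloud
    ```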
18. Configure the `s9s-tools` repository:

    ```bash
    $ wget -qO - http://repo.severalnines.com/s9s-tools/{os_codename}/Release.key | apt-key add -
    $ echo "deb http://repo.severalnines.com/s9s-tools/{os_codename}/ ./" | tee /etc/apt/sources.list.d/s9s-tools.list
    ```

    Replace `{os_codename}` with the operating system code from the following list: `focal`, `disco`, `bionic`, `xenial`, `jammy`.
19. Install the `s9s-tools` package:
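    For example:

    ```bash
    $ apt-get update
    $ apt-get -y install s9s-tools
    ```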
20. Create the `ccsetup` user, which is used to register a new account:
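    The same command shown in the RHEL section applies here:

    ```bash
    $ export S9S_USER_CONFIG=/tmp/ccsetup.conf
    $ s9s user --create --new-password=admin --group=admins --email-address="{your_email_address}" --controller="https://localhost:9501" ccsetup
    $ unset S9S_USER_CONFIG
    ```

    The email `{your_email_address}` is used to register the new account.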
21. Generate an SSH key to be used by ClusterControl when connecting to all managed hosts. In this example, we are using the root user to connect to the managed hosts. To generate an SSH key for the root user, do:
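    A typical invocation, generating a passphrase-less RSA key for the root user:

    ```bash
    $ ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ''
    ```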
    !!! note

        If you are running as a sudoer, the default SSH key will be located under `/home/$USER/.ssh/id_rsa`. See Operating System User.
22. Before importing a database server/cluster into ClusterControl or deploying a new cluster, set up passwordless SSH from the ClusterControl host to the database host(s). Use the following command to copy the SSH key to the target hosts:
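    For example, using `ssh-copy-id`:

    ```bash
    $ ssh-copy-id -i ~/.ssh/id_rsa {SSH user}@{IP address of the target node}
    ```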
    Replace `{SSH user}` and `{IP address of the target node}` with the appropriate values. Repeat the command for all target hosts.
23. Open the ClusterControl UI at `https://{ClusterControl_host}/` and create the default admin password by providing a valid email address and password. You will be redirected to the ClusterControl default page.
The installation is complete, and you can start importing existing database clusters or deploying new ones. See the User Guide to start using ClusterControl.