If you want to have more control over the installation process, you may perform a manual installation. ClusterControl requires a number of packages to be installed and configured, as described in the following list:
- clustercontrol2 – ClusterControl v2 web user interface.
- clustercontrol-controller – ClusterControl CMON controller.
- clustercontrol-notifications – ClusterControl notification module, if you would like to integrate with third-party tools like PagerDuty and Slack.
- clustercontrol-ssh – ClusterControl web-based SSH module, if you would like to access the host via SSH directly from the ClusterControl UI.
- clustercontrol-cloud – ClusterControl cloud module, if you would like to manage your cloud instances directly from the ClusterControl UI.
- clustercontrol-proxy – ClusterControl controller proxying service for ClusterControl Operations Center.
- clustercontrol-clud – ClusterControl cloud file manager module, if you would like to upload and download backups from cloud storage. It requires clustercontrol-cloud.
- s9s-tools – ClusterControl CLI client, if you would like to manage your cluster using a command-line interface.
Installing and uninstalling ClusterControl should not bring any downtime to the managed database cluster.
Requirements
Make sure the following is ready prior to this installation:
- Verify that sudo is working properly if you are using a non-root user.
- ClusterControl node must be able to access all database nodes via passwordless SSH (see the quick check after this list).
- You must have an internet connection on the ClusterControl node during the installation process. Otherwise, see Offline Installation.
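A quick way to confirm the passwordless SSH requirement from the ClusterControl node (a minimal check, using the same {SSH user} and target node placeholders as the later steps):
$ ssh -o BatchMode=yes {SSH user}@{IP address of the target node} "echo OK" # prints OK only if no password prompt is required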
Installation Steps
Steps described in the following sections should be performed on the ClusterControl node unless specified otherwise.
RHEL/CentOS
1. Set up the ClusterControl repository – YUM Repository.
2. Set up the ClusterControl CLI repository – RHEL/CentOS.
3. Disable SELinux and open the required ports (or stop the firewall service):
$ sed -i 's|SELINUX=enforcing|SELINUX=disabled|g' /etc/selinux/config
$ setenforce 0
$ service iptables stop # RedHat/CentOS 6
$ systemctl stop firewalld # RedHat 7/8 or CentOS 7/8
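If you prefer to keep the firewall running, you can open the required ports instead of stopping firewalld. A sketch assuming the commonly documented ClusterControl ports (80/443 for the web UI, 9500/9501 for the CMON RPC interface, 9510 for events, 9511 for web SSH, 9518 for cloud); verify the list against the documentation for your ClusterControl version:
$ firewall-cmd --permanent --add-port={80,443,9500,9501,9510,9511,9518}/tcp
$ firewall-cmd --reload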
4. Install required packages via package manager:
# RHEL/CentOS 8
$ yum -y install wget dmidecode hostname python36 mariadb mariadb-server httpd mod_ssl
$ alternatives --set python /usr/bin/python3
# RHEL/CentOS 7
$ yum -y install wget dmidecode python jq mariadb mariadb-server httpd mod_ssl
# RHEL/CentOS 6
$ yum -y install wget dmidecode python jq mariadb mariadb-server httpd mod_ssl
5. Install the EPEL packages:
# RHEL/CentOS 8
$ dnf config-manager --set-enabled powertools
$ dnf -y install epel-release epel-next-release
# RHEL/CentOS 7
$ yum -y install epel-release
# RHEL/CentOS 6
$ yum -y install epel-release
6. Install ClusterControl packages:
$ yum -y install clustercontrol2 \
clustercontrol-controller \
clustercontrol-proxy \
clustercontrol-ssh \
clustercontrol-notifications \
clustercontrol-cloud \
clustercontrol-clud
7. Start MySQL server (MariaDB for RHEL 7/8 or CentOS 7/8), enable it on boot and set a MySQL root password:
$ service mysqld start # Redhat/CentOS 6
$ systemctl start mariadb # Redhat/CentOS 7/8
$ chkconfig mysqld on # Redhat/CentOS 6
$ systemctl enable mariadb # Redhat/CentOS 7/8
$ mysqladmin -uroot password 'themysqlrootpassword'
8. Create the cmon user and grant it the required privileges:
$ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"localhost" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
$ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"127.0.0.1" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
$ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"{controller_ip_address}" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
$ mysql -uroot -p -e 'FLUSH PRIVILEGES'
Replace {cmonpassword} with the respective value and {controller_ip_address} with the valid FQDN or IP address of the ClusterControl node.
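To verify the grants, try connecting as the cmon user with the same placeholder values:
$ mysql -ucmon -p'{cmonpassword}' -h127.0.0.1 -e 'SELECT CURRENT_USER()'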
9. Generate a ClusterControl key to be used by RPC_TOKEN:
$ uuidgen | tr -d '-'
6856d96a19d049aa8a7f4a5ba57a34740b3faf57
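The second line above is sample output; your token will differ. To avoid copy-paste mistakes, you can capture the token in a shell variable and reference it in the next step:
$ RPC_TOKEN=$(uuidgen | tr -d '-')
$ echo $RPC_TOKEN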
10. Initialize the cmon service by running the command below:
$ cmon --init \
--mysql-hostname="127.0.0.1" \
--mysql-port="3306" \
--mysql-username="cmon" \
--mysql-password="{cmonpassword}" \
--mysql-database="cmon" \
--hostname="{ClusterControl Primary IP Address}" \
--rpc-token="{ClusterControl API key as generated above}" \
--controller-id="clustercontrol"
The value of hostname must be either a valid FQDN or the IP address of the ClusterControl node. If the host has multiple IP addresses, pick the primary IP address of the host. The cmon user password is the one set in the previous step when creating the user.
11. The ClusterControl event and cloud modules require their service definitions in /etc/default/cmon. Create the file and add the following lines:
EVENTS_CLIENT="http://127.0.0.1:9510"
CLOUD_SERVICE="http://127.0.0.1:9518"
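One way to create the file in a single command, using a heredoc as in the other steps:
$ cat > /etc/default/cmon << EOF
EVENTS_CLIENT="http://127.0.0.1:9510"
CLOUD_SERVICE="http://127.0.0.1:9518"
EOF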
12. Create a temporary SSL directory and the certificate extension file:
$ mkdir -p /tmp/ssl
$ cat > /tmp/ssl/v3.ext << EOF
basicConstraints = CA:FALSE
#authorityKeyIdentifier=keyid,issuer
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = clientAuth, serverAuth
subjectAltName = DNS:dev.severalnines.local
EOF
13. Generate a self-signed certificate:
$ openssl genrsa -out /tmp/ssl/server.key 2048
$ openssl req -new -key /tmp/ssl/server.key -out /tmp/ssl/server.csr \
-addext "subjectAltName = DNS:dev.severalnines.local" \
-subj "/C=SE/ST=Stockholm/L=Stockholm/O='Severalnines AB'/OU=Severalnines/CN=*.severalnines.local/[email protected]"
$ openssl x509 -req -extfile /tmp/ssl/v3.ext -days 1825 -sha256 -in /tmp/ssl/server.csr -signkey /tmp/ssl/server.key -out /tmp/ssl/server.crt
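Optionally, confirm the certificate carries the expected subject alternative name before installing it:
$ openssl x509 -in /tmp/ssl/server.crt -noout -text | grep -A1 "Subject Alternative Name"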
14. Copy the certificate and key into the default ClusterControl directories:
$ cp -f /tmp/ssl/server.crt /etc/ssl/certs/s9server.crt
$ cp -f /tmp/ssl/server.key /etc/ssl/private/s9server.key
15. Configure the header origin in /etc/httpd/conf.d/security.conf:
$ cat > /etc/httpd/conf.d/security.conf << EOF
Header set X-Frame-Options: "sameorigin"
EOF
16. Copy the Apache configuration file cc-frontend.conf from /usr/share/cmon/apache/ to /etc/httpd/conf.d/cc-webapp.conf:
$ cp /usr/share/cmon/apache/cc-frontend.conf /etc/httpd/conf.d/cc-webapp.conf
17. Replace the default URL with the correct hostname:
$ sed -i "s|https://cc2.severalnines.local:9443.*|https://{controller_ip_address}\/|g" /etc/httpd/conf.d/cc-webapp.conf
$ sed -i "s|Listen 9443|#Listen 443|g" /etc/httpd/conf.d/cc-webapp.conf
$ sed -i "s|9443|443|g" /etc/httpd/conf.d/cc-webapp.conf
Replace {controller_ip_address} with the valid FQDN or IP address of the ClusterControl node.
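Validate the Apache configuration after these edits; the following should report Syntax OK:
$ httpd -t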
18. Enable ClusterControl and Apache daemons on boot and start them:
For sysvinit:
$ chkconfig --levels 235 cmon on
$ chkconfig --levels 235 cmon-ssh on
$ chkconfig --levels 235 cmon-events on
$ chkconfig --levels 235 cmon-cloud on
$ chkconfig --levels 235 httpd on
$ service cmon start
$ service cmon-ssh start
$ service cmon-events start
$ service cmon-cloud start
$ service httpd start
For systemd:
$ systemctl enable cmon cmon-ssh cmon-events cmon-cloud httpd
$ systemctl start cmon cmon-ssh cmon-events cmon-cloud httpd
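On systemd hosts, confirm that all services came up:
$ systemctl is-active cmon cmon-ssh cmon-events cmon-cloud httpd # each line should print "active"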
19. Configure the s9s-tools repository (skip this step if you already completed step 2):
$ cat > /etc/yum.repos.d/s9s-tools.repo << EOF
[s9s-tools]
name=s9s-tools ({os_codename})
type=rpm-md
baseurl=https://repo.severalnines.com/s9s-tools/{os_codename}
gpgcheck=1
gpgkey=https://repo.severalnines.com/s9s-tools/{os_codename}/repodata/repomd.xml.key
enabled=1
EOF
Replace {os_codename} based on the list of operating system codes below: RHEL_7, RHEL_8, RHEL_9, CentOS_7, CentOS_8, CentOS_9.
20. Install s9s-tools (skip this step if you already completed step 2):
$ yum install s9s-tools
21. Create the ccrpc user, which is required since ClusterControl 1.8.2 to support the new user management:
$ export S9S_USER_CONFIG=$HOME/.s9s/ccrpc.conf
$ s9s user --create --new-password={generated ClusterControl API token} --generate-key --private-key-file=$HOME/.s9s/ccrpc.key --group=admins --controller=https://localhost:9501 ccrpc
$ s9s user --set --first-name=RPC --last-name=API --cmon-user=ccrpc &>/dev/null
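As a quick check (not part of the official procedure), you can list the users while the ccrpc configuration is still exported to confirm the user was created:
$ s9s user --list --long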
22. Create the ccsetup user for registering a new account:
$ export S9S_USER_CONFIG=/tmp/ccsetup.conf
$ s9s user --create --new-password=admin --group=admins --email-address="{your_email_address}" --controller="https://localhost:9501" ccsetup
$ unset S9S_USER_CONFIG
The email {your_email_address} is used to register the new account.
23. Generate an SSH key to be used by ClusterControl when connecting to all managed hosts. In this example, we are using the root user to connect to the managed hosts. To generate an SSH key for the root user, do:
$ whoami
root
$ ssh-keygen -t rsa # Press enter for all prompts
If you are running as a sudo user, the default SSH key will be located under /home/$USER/.ssh/id_rsa. See Operating System User.
24. Before creating or importing a database server/cluster into ClusterControl, set up passwordless SSH from the ClusterControl host to the database host(s). Use the following command to copy the SSH key to the target hosts:
$ ssh-copy-id -i ~/.ssh/id_rsa {SSH user}@{IP address of the target node}
Replace {SSH user} and {IP address of the target node} with appropriate values. Repeat the command for all target hosts.
25. Open the ClusterControl UI at https://{ClusterControl_host}/ and create the default admin password by providing a valid email address and password. You will be redirected to the ClusterControl default page.
The installation is complete, and you can start importing an existing database cluster or deploying a new one. Please review the User Guide (GUI) for details.
Debian/Ubuntu
The following steps should be performed on the ClusterControl node unless specified otherwise. Ensure you have the Severalnines repository and the ClusterControl UI installed; refer to the Severalnines Repository section for details. Omit sudo if you are installing as the root user. Note that for Ubuntu 12.04/Debian 7 and earlier, replace all occurrences of /var/www/html with /var/www in the following instructions.
1. Set up the ClusterControl repository – APT Repository.
2. Set up the ClusterControl CLI repository – Debian/Ubuntu DEB Repositories.
3. If you have AppArmor running, disable it and open the required ports (or stop iptables):
$ sudo /etc/init.d/apparmor stop
$ sudo /etc/init.d/apparmor teardown
$ sudo update-rc.d -f apparmor remove
$ sudo service iptables stop
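As with the RHEL instructions, you can open the required ports instead of stopping the firewall. A sketch using ufw, assuming it is installed and the commonly documented ClusterControl ports; verify the list against the documentation for your version:
$ sudo ufw allow 443/tcp
$ sudo ufw allow 9500:9501/tcp
$ sudo ufw allow 9510:9511/tcp
$ sudo ufw allow 9518/tcp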
4. Install ClusterControl dependencies:
$ sudo apt-get update
$ sudo apt-get install -y python3 apache2 software-properties-common mysql-client mysql-server
$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 1
5. Install the ClusterControl packages:
$ sudo apt-get install -y clustercontrol-controller \
clustercontrol2 \
clustercontrol-proxy \
clustercontrol-ssh \
clustercontrol-notifications \
clustercontrol-cloud \
clustercontrol-clud
6. Start MySQL server, enable it on boot and set a MySQL root password:
$ systemctl start mysql
$ systemctl enable mysql
$ mysqladmin -uroot password 'themysqlrootpassword'
7. Create the cmon user and grant it the required privileges:
$ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"localhost" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
$ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"127.0.0.1" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
$ mysql -uroot -p -e 'GRANT ALL PRIVILEGES ON *.* TO "cmon"@"{controller_ip_address}" IDENTIFIED BY "{cmonpassword}" WITH GRANT OPTION'
$ mysql -uroot -p -e 'FLUSH PRIVILEGES'
Replace {cmonpassword} with the respective value and {controller_ip_address} with the valid FQDN or IP address of the ClusterControl node.
8. Generate a ClusterControl key to be used by RPC_TOKEN:
$ uuidgen | tr -d '-'
6856d96a19d049aa8a7f4a5ba57a34740b3faf57
9. Initialize the cmon service by running the command below:
$ cmon --init \
--mysql-hostname="127.0.0.1" \
--mysql-port="3306" \
--mysql-username="cmon" \
--mysql-password="{cmonpassword}" \
--mysql-database="cmon" \
--hostname="{ClusterControl Primary IP Address}" \
--rpc-token="{ClusterControl API key as generated above}" \
--controller-id="clustercontrol"
The value of hostname must be either a valid FQDN or the IP address of the ClusterControl node. If the host has multiple IP addresses, pick the primary IP address of the host. The cmon user password is the one set in the previous step when creating the user.
10. The ClusterControl event and cloud modules require their service definitions in /etc/default/cmon. Create the file and add the following lines:
EVENTS_CLIENT="http://127.0.0.1:9510"
CLOUD_SERVICE="http://127.0.0.1:9518"
11. Create a temporary SSL directory and the certificate extension file:
$ mkdir -p /tmp/ssl
$ cat > /tmp/ssl/v3.ext << EOF
basicConstraints = CA:FALSE
#authorityKeyIdentifier=keyid,issuer
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = clientAuth, serverAuth
subjectAltName = DNS:dev.severalnines.local
EOF
12. Generate a self-signed certificate:
$ openssl genrsa -out /tmp/ssl/server.key 2048
$ openssl req -new -key /tmp/ssl/server.key -out /tmp/ssl/server.csr \
-addext "subjectAltName = DNS:dev.severalnines.local" \
-subj "/C=SE/ST=Stockholm/L=Stockholm/O='Severalnines AB'/OU=Severalnines/CN=*.severalnines.local/[email protected]"
$ openssl x509 -req -extfile /tmp/ssl/v3.ext -days 1825 -sha256 -in /tmp/ssl/server.csr -signkey /tmp/ssl/server.key -out /tmp/ssl/server.crt
13. Copy the certificate and key into the default ClusterControl directories:
$ cp -f /tmp/ssl/server.crt /etc/ssl/certs/s9server.crt
$ cp -f /tmp/ssl/server.key /etc/ssl/private/s9server.key
14. Configure the header origin in /etc/apache2/conf-available/security.conf:
$ sed -ibak "s|^#Header set X-Frame-Options: \"sameorigin\"|Header set X-Frame-Options: \"sameorigin\"|g" /etc/apache2/conf-available/security.conf
$ ln -sfn /etc/apache2/conf-available/security.conf /etc/apache2/conf-enabled/security.conf
15. Replace the default URL with the correct hostname:
$ sed -i "s|^[ \t]*ServerName.*| ServerName {controller_ip_address}|g" /etc/apache2/sites-available/cc-frontend.conf
$ sed -i "s|https://cc2.severalnines.local:9443.*|https://{controller_ip_address}\/|g" /etc/apache2/sites-available/cc-frontend.conf
$ sed -i "s|Listen 9443|#Listen 443|g" /etc/apache2/sites-available/cc-frontend.conf
$ sed -i "s|9443|443|g" /etc/apache2/sites-available/cc-frontend.conf
Replace {controller_ip_address} with the valid FQDN or IP address of the ClusterControl node.
16. Enable the required Apache modules and create a symlink in sites-enabled for the default HTTPS virtual host:
$ a2enmod ssl rewrite proxy proxy_http proxy_wstunnel
$ a2ensite default-ssl
17. Restart the Apache web server to apply the changes:
$ sudo service apache2 restart
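A quick sanity check that the UI answers over HTTPS (the -k flag skips verification of the self-signed certificate; expect 200 or a 3xx redirect code):
$ curl -ks -o /dev/null -w "%{http_code}\n" https://localhost/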
18. Enable the ClusterControl daemons on boot and start them:
For sysvinit/upstart:
$ sudo update-rc.d cmon defaults
$ sudo update-rc.d cmon-ssh defaults
$ sudo update-rc.d cmon-events defaults
$ sudo update-rc.d cmon-cloud defaults
$ service cmon start
$ service cmon-ssh start
$ service cmon-events start
$ service cmon-cloud start
For systemd:
$ systemctl enable cmon cmon-ssh cmon-events cmon-cloud
$ systemctl restart cmon cmon-ssh cmon-events cmon-cloud
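Confirm the daemons are running:
$ systemctl is-active cmon cmon-ssh cmon-events cmon-cloud # each line should print "active"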
19. Configure the s9s-tools repository:
$ wget -qO - http://repo.severalnines.com/s9s-tools/{os_codename}/Release.key | apt-key add -
$ echo "deb http://repo.severalnines.com/s9s-tools/{os_codename}/ ./" | tee /etc/apt/sources.list.d/s9s-tools.list
Replace {os_codename} based on the list of operating system codes below: focal, disco, bionic, xenial, jammy.
20. Install s9s-tools:
$ apt install s9s-tools
21. Create the ccrpc user, which is required since ClusterControl 1.8.2 to support the new user management:
$ export S9S_USER_CONFIG=$HOME/.s9s/ccrpc.conf
$ s9s user --create --new-password={generated ClusterControl API token} --generate-key --private-key-file=$HOME/.s9s/ccrpc.key --group=admins --controller=https://localhost:9501 ccrpc
$ s9s user --set --first-name=RPC --last-name=API --cmon-user=ccrpc &>/dev/null
22. Create the ccsetup user for registering a new account:
$ export S9S_USER_CONFIG=/tmp/ccsetup.conf
$ s9s user --create --new-password=admin --group=admins --email-address="{your_email_address}" --controller="https://localhost:9501" ccsetup
$ unset S9S_USER_CONFIG
23. Generate an SSH key to be used by ClusterControl when connecting to all managed hosts. In this example, we are using the root user to connect to the managed hosts. To generate an SSH key for the root user, do:
$ whoami
root
$ ssh-keygen -t rsa # Press enter for all prompts
If you are running as a sudo user, the default SSH key will be located under /home/$USER/.ssh/id_rsa. See Operating System User.
24. Before importing a database server/cluster into ClusterControl or deploying a new cluster, set up passwordless SSH from the ClusterControl host to the database host(s). Use the following command to copy the SSH key to the target hosts:
$ ssh-copy-id -i ~/.ssh/id_rsa {SSH user}@{IP address of the target node}
Replace {SSH user} and {IP address of the target node} with appropriate values. Repeat the command for all target hosts.
25. Open the ClusterControl UI at https://{ClusterControl_host}/ and create the default admin password by providing a valid email address and password. You will be redirected to the ClusterControl default page.
The installation is complete, and you can start importing an existing database cluster or deploying a new one. Please review the User Guide (GUI) for details.