
ClusterControl and Database Clusters Deployment in an Offline Environment

This article outlines how to install Severalnines ClusterControl offline and then deploy database clusters on hosts (virtual machines or bare-metal servers) that have no internet connection.

The document is divided into two parts. Part one contains instructions on setting up a software repository host that will serve database, third-party tool, and OS software packages to the database hosts. Part two contains instructions on setting up ClusterControl on a host that does not have an internet connection. All hosts in this article run CentOS 7, which uses the YUM package manager.

Part one and part two can be performed independently of each other; however, the repo server must be ready before database clusters are deployed from ClusterControl.

We will use the following naming convention for the hosts:

  • Satellite repository server (connected to the Internet, reachable by ClusterControl and database servers via local network)
  • ClusterControl server (offline)
  • Database servers (offline)

Setting up Satellite Repository Server

The satellite server acts as the offline repository server used by ClusterControl when deploying database servers and the corresponding tools. This server needs an internet connection, since we must pull the necessary repositories into it. Let’s call it the “repo server”. It will store and host thousands of packages, so allocate sufficient disk space to the repository directory (/var/www/html/repos) in advance.

Setting up OS, OS tools and third-party utility repositories

1. Set up a web server on the repo server to serve up packages to the yum package manager remotely:

$ yum install -y httpd
$ systemctl start httpd
$ systemctl enable httpd
$ systemctl status httpd
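
If firewalld is running on the repo server, you will also need to open the HTTP port so the offline hosts can reach the repository over the local network. This is an extra step not in the original procedure; skip it if the firewall is disabled:

$ firewall-cmd --permanent --add-service=http
$ firewall-cmd --reload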

2. Set up software required for a remote repository server:

$ yum install -y createrepo yum-utils
$ yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

3. Set up repository directory to be served by the web server:

$ mkdir -p /var/www/html/repos/rhel

4. Perform repository syncing with the appropriate repositories (the following commands will take some time to finish):

$ reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/var/www/html/repos/rhel
$ reposync -g -l -d -m --repoid=extras --newest-only --download-metadata --download_path=/var/www/html/repos/rhel
$ reposync -g -l -d -m --repoid=updates --newest-only --download-metadata --download_path=/var/www/html/repos/rhel
$ reposync -l -d -m --repoid=epel --newest-only --download-metadata --download_path=/var/www/html/repos/rhel
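
Since the offline hosts will rely on this mirror for updates, you may want to repeat the sync periodically. Below is a minimal sketch of a daily cron script; the file name and schedule are our own choice, not part of the original procedure:

$ cat > /etc/cron.daily/reposync-rhel <<'EOF'
#!/bin/bash
# Re-sync the mirrored OS repositories and rebuild the metadata
for repoid in base extras updates; do
    reposync -g -l -d -m --repoid=$repoid --newest-only --download-metadata --download_path=/var/www/html/repos/rhel
done
reposync -l -d -m --repoid=epel --newest-only --download-metadata --download_path=/var/www/html/repos/rhel
createrepo --update /var/www/html/repos/rhel
EOF
$ chmod +x /etc/cron.daily/reposync-rhel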

Adding PostgreSQL repository to the repo server

1. Install the PostgreSQL repository release package:

$ yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

2. Clear the repository cache and list out the repositories, make sure the newly added PostgreSQL repository is in the list:

$ yum clean all
$ yum repolist

3. Set up the repository directory to be served by the web server for PostgreSQL:

$ mkdir -p /var/www/html/repos/postgresql

4. Perform repository syncing:

$ reposync -l -d -m --repoid=pgdg-common --newest-only --download-metadata --download_path=/var/www/html/repos/postgresql
$ reposync -l -d -m --repoid=pgdg14 --newest-only --download-metadata --download_path=/var/www/html/repos/postgresql
$ reposync -l -d -m --repoid=pgdg13 --newest-only --download-metadata --download_path=/var/www/html/repos/postgresql
$ reposync -l -d -m --repoid=pgdg12 --newest-only --download-metadata --download_path=/var/www/html/repos/postgresql
$ reposync -l -d -m --repoid=pgdg11 --newest-only --download-metadata --download_path=/var/www/html/repos/postgresql
$ reposync -l -d -m --repoid=pgdg10 --newest-only --download-metadata --download_path=/var/www/html/repos/postgresql

 

Adding MongoDB repository to the repo server

The MongoDB server package is named mongodb-org (installed with yum install mongodb-org). Therefore, /etc/yum.repos.d/mongodb-org.repo can only have one entry pointing to the appropriate version of MongoDB (one and only one of either 4.2, 4.4, or 5.0).

1. Create a repo file located at /etc/yum.repos.d/mongodb-org.repo and make sure it has the following lines (the example shows we activate MongoDB 5.0):

# cat mongodb-org.repo 
[mongodb-org-5.0-latest]
name=MongoDB latest 5.0 Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/5.0/x86_64/
gpgcheck=0
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-5.0.asc
 
#[mongodb-org-4.4]
#name=MongoDB Repository
#baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.4/x86_64/
#gpgcheck=1
#enabled=1
#gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc
Note

Disable GPG key checking if you don’t have an internet connection. Furthermore, only one version (i.e., either 4.2, 4.4, or 5.0) can be served at any given time, because all versions share the same package name, mongodb-org.
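
As a quick sanity check after editing the file, you can confirm that exactly one MongoDB repository is active:

$ yum repolist enabled | grep -i mongodb # should print exactly one mongodb-org entry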

2. Clear the repository cache and list out the repositories, make sure the newly added MongoDB repository is in the list:

$ yum clean all
$ yum repolist

3. Set up the repository directory to be served by the web server for MongoDB:

$ mkdir -p /var/www/html/repos/mongodb

4. Perform repository syncing:

$ reposync -l -d -m --repoid=mongodb-org-5.0-latest --newest-only --download-metadata --download_path=/var/www/html/repos/mongodb

Adding Percona MongoDB repository to the repo server

The Percona Server for MongoDB package is named percona-server-mongodb (installed with yum install percona-server-mongodb). Therefore, /etc/yum.repos.d/percona-mongodb.repo can only have one entry pointing to the appropriate version of MongoDB (one and only one of either 4.2, 4.4, or 5.0).

1. Create a repo file located at /etc/yum.repos.d/percona-mongodb.repo and make sure it has the following lines (the example shows we activate Percona MongoDB 5.0):

[psmdb-50-release-x86_64]
name = Percona Server for MongoDB 5.0 release/x86_64 YUM repository
baseurl = http://repo.percona.com/psmdb-50/yum/release/$releasever/RPMS/x86_64
enabled = 1
gpgcheck = 0
#gpgkey = file:///etc/pki/rpm-gpg/PERCONA-PACKAGING-KEY
 
[psmdb-50-release-noarch]
name = Percona Server for MongoDB 5.0 release/noarch YUM repository
baseurl = http://repo.percona.com/psmdb-50/yum/release/$releasever/RPMS/noarch
enabled = 1
gpgcheck = 0
#gpgkey = file:///etc/pki/rpm-gpg/PERCONA-PACKAGING-KEY
 
[psmdb-50-release-sources]
name = Percona Server for MongoDB 5.0 release/sources YUM repository
baseurl = http://repo.percona.com/psmdb-50/yum/release/$releasever/SRPMS
enabled = 0
gpgcheck = 0
#gpgkey = file:///etc/pki/rpm-gpg/PERCONA-PACKAGING-KEY
 
[tools-release-x86_64]
name = Percona Tools release/x86_64 YUM repository
baseurl = http://repo.percona.com/tools/yum/release/$releasever/RPMS/x86_64
enabled = 1
gpgcheck = 0
#gpgkey = file:///etc/pki/rpm-gpg/PERCONA-PACKAGING-KEY
 
[tools-release-noarch]
name = Percona Tools release/noarch YUM repository
baseurl = http://repo.percona.com/tools/yum/release/$releasever/RPMS/noarch
enabled = 1
gpgcheck = 0
#gpgkey = file:///etc/pki/rpm-gpg/PERCONA-PACKAGING-KEY
 
[tools-release-sources]
name = Percona Tools release/sources YUM repository
baseurl = http://repo.percona.com/tools/yum/release/$releasever/SRPMS
enabled = 0
gpgcheck = 0
#gpgkey = file:///etc/pki/rpm-gpg/PERCONA-PACKAGING-KEY
Note

Disable GPG key checking if you don’t have an internet connection. Furthermore, only one version (i.e., either 4.2, 4.4, or 5.0) can be served at any given time, because all versions share the same package name, percona-server-mongodb.

2. Clear the repository cache and list out the repositories, make sure the newly added Percona repositories are in the list:

$ yum clean all
$ yum repolist

3. Set up the repository directory to be served by the web server for Percona:

$ mkdir -p /var/www/html/repos/percona

4. Perform repository syncing:

$ reposync -l -d -m --repoid=psmdb-50-release-x86_64  --newest-only --download-metadata --download_path=/var/www/html/repos/percona
$ reposync -l -d -m --repoid=tools-release-x86_64  --newest-only --download-metadata --download_path=/var/www/html/repos/percona

Adding Oracle MySQL repository to the repo server

1. Install the appropriate Oracle MySQL community release package:

$ yum install https://dev.mysql.com/get/mysql80-community-release-el7-6.noarch.rpm 

2. Clear the repository cache and list out the repositories, make sure the newly added MySQL 8 repository is in the list:

$ yum clean all
$ yum repolist

3. Set up the repository directory to be served by the web server for Oracle:

$ mkdir -p /var/www/html/repos/oracle

4. Perform repository syncing:

$ reposync -l -d -m --repoid=mysql57-community --newest-only --download-metadata --download_path=/var/www/html/repos/oracle
$ reposync -l -d -m --repoid=mysql80-community --newest-only --download-metadata --download_path=/var/www/html/repos/oracle
$ reposync -l -d -m --repoid=mysql-connectors-community --newest-only --download-metadata --download_path=/var/www/html/repos/oracle
$ reposync -l -d -m --repoid=mysql-tools-community --newest-only --download-metadata --download_path=/var/www/html/repos/oracle
$ reposync -l -d -m --repoid=mysql-cluster-7.6-community --newest-only --download-metadata --download_path=/var/www/html/repos/oracle
$ reposync -l -d -m --repoid=mysql-cluster-8.0-community --newest-only --download-metadata --download_path=/var/www/html/repos/oracle

Adding Percona MySQL repository to the repo server

1. Install the appropriate Percona release package:

$ yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

2. Enable the Percona Server for MySQL 8.0 repository (the latest version at the time of this writing):

$ percona-release setup ps80

3. Clear the repository cache and list out the repositories, make sure the newly added Percona Server repository is in the list:

$ yum clean all
$ yum repolist

4. Set up the repository directory to be served by the web server for Percona (if not exists):

$ mkdir -p /var/www/html/repos/percona

5. Perform repository syncing:

$ reposync -l -d -m --repoid=ps-80-release-x86_64 --newest-only --download-metadata --download_path=/var/www/html/repos/percona

Adding MariaDB repository to the repo server

1. Install the MariaDB repository using the official mariadb_repo_setup script:

$ curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash
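
By default, the script configures the latest MariaDB series. If you need to mirror a specific server version instead, the script accepts a version flag; the flag below is documented by MariaDB at the time of writing, so verify it against your copy of the script:

$ curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash -s -- --mariadb-server-version="mariadb-10.6"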

2. Clear the repository cache and list out the repositories, make sure the newly added MariaDB repository is in the list:

$ yum clean all
$ yum repolist

3. Set up the repository directory to be served by the web server for MariaDB:

$ mkdir -p /var/www/html/repos/mariadb

4. Perform repository syncing:

$ reposync -l -d -m --repoid=mariadb-main --newest-only --download-metadata --download_path=/var/www/html/repos/mariadb
$ reposync -l -d -m --repoid=mariadb-maxscale --newest-only --download-metadata --download_path=/var/www/html/repos/mariadb
$ reposync -l -d -m --repoid=mariadb-tools --newest-only --download-metadata --download_path=/var/www/html/repos/mariadb

Creating the Repositories

Now that we have downloaded all the necessary packages, it is time to create the repositories on the repo server:

$ createrepo /var/www/html/repos/rhel
$ createrepo /var/www/html/repos/postgresql
$ createrepo /var/www/html/repos/mongodb
$ createrepo /var/www/html/repos/percona
$ createrepo /var/www/html/repos/oracle
$ createrepo /var/www/html/repos/mariadb

To give you a heads-up on disk usage, here is how much space the repository directory requires:

$ du -sh /var/www/html/repos
38G   /var/www/html/repos/

At this point, the repo server should be ready with all the necessary packages for OS, OS tools, third-party dependencies (Perl, Python, wget, tar, socat, net-tools, etc) and databases such as MySQL, MariaDB, PostgreSQL, MongoDB (from MongoDB Org and Percona). We should see the following directory listing under /var/www/html/repos:

$ tree -d -L 2 /var/www/html/repos
.
├── mariadb
│   ├── mariadb-main
│   ├── mariadb-maxscale
│   ├── mariadb-tools
│   └── repodata
├── mongodb
│   ├── mongodb-org-5.0-latest
│   └── repodata
├── oracle
│   ├── mysql57-community
│   ├── mysql80-community
│   ├── mysql-cluster-7.6-community
│   ├── mysql-cluster-8.0-community
│   ├── mysql-connectors-community
│   ├── mysql-tools-community
│   └── repodata
├── percona
│   ├── ps-80-release-x86_64
│   ├── psmdb-50-release-x86_64
│   ├── repodata
│   └── tools-release-x86_64
├── postgresql
│   ├── pgdg13
│   ├── pgdg14
│   ├── pgdg-common
│   └── repodata
└── rhel
    ├── base
    ├── epel
    ├── repodata
    └── updates
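
Before moving on, it is worth verifying from another host on the local network that the web server actually serves the repository metadata. A quick check, using the repo server IP from the example environment in the next section (192.168.100.100):

$ curl -s http://192.168.100.100/repos/rhel/repodata/repomd.xml | head -n 3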

Setting Up Database Nodes

Next, we need to set up the following things on all database nodes:

  • Configure passwordless SSH from the ClusterControl node to all database nodes.
  • Configure the package repository to be pointing to the satellite server.

Suppose we want to deploy a three-node PostgreSQL streaming replication setup, with the following host details:

  • 192.168.100.100 – Repo server
  • 192.168.100.200 – ClusterControl
  • 192.168.100.211 – PostgreSQL db1
  • 192.168.100.212 – PostgreSQL db2
  • 192.168.100.213 – PostgreSQL db3

Configure Passwordless SSH

Make sure the appropriate user (root or sudo user) can SSH from the ClusterControl node to the database nodes.

1. On the ClusterControl server, generate an SSH key to be used for passwordless SSH:

$ whoami
root
$ ssh-keygen -t rsa # press Enter on all prompts

2. Copy this SSH key to all the database nodes:

$ ssh-copy-id root@192.168.100.211 # db1
$ ssh-copy-id root@192.168.100.212 # db2
$ ssh-copy-id root@192.168.100.213 # db3

If the database server does not support SSH password authentication, you need to copy the SSH public key located at /root/.ssh/id_rsa.pub on the ClusterControl server into /root/.ssh/authorized_keys on every database node. See Passwordless SSH for more details.

3. Verify that you can execute the following command without error:

$ ssh root@192.168.100.211 "ls -l /sbin"
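
To check all three database nodes in one pass, a small loop over the example IPs also works:

$ for ip in 192.168.100.211 192.168.100.212 192.168.100.213; do
      ssh root@$ip "hostname" || echo "passwordless SSH to $ip failed"
  done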

We have now configured passwordless SSH.

Configure OS Repository

Perform the following commands on all nodes (ClusterControl and all database nodes):

1. Create a backup directory for the existing repository definition files:

$ mkdir ~/repos.d.backup

2. Move existing repository definition files to the backup directory:

$ mv /etc/yum.repos.d/*.repo ~/repos.d.backup

3. Create a repository definition file pointing to the repo server created in Setting up Satellite Repository Server.

$ vi /etc/yum.repos.d/satellite.repo

And add the following lines:

[rhel]
name=RHEL Local Repo Server
baseurl=http://192.168.100.100/repos/rhel
enabled=1
gpgcheck=0

4. Refresh the repository list:

$ sudo yum clean all
$ sudo yum repolist # make sure only the above repository is listed
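
As an optional sanity check, confirm that packages are resolvable from the satellite repository alone (this is read-only; nothing is installed):

$ yum --disablerepo="*" --enablerepo="rhel" list available | head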

Configure Database Repository

Perform the following commands on all database nodes, according to the database type to be deployed on each node.

1. Create a backup directory for the existing repository definition files:

$ mkdir ~/repos.d.backup

2. Move existing repository definition files to the backup directory:

$ mv /etc/yum.repos.d/*.repo ~/repos.d.backup

3. Create a repository definition file pointing to the repo server created in Setting up Satellite Repository Server.

 

For MariaDB (MariaDB Server, MariaDB Galera):

$ vi /etc/yum.repos.d/mariadb.repo

And add the following lines:

[mariadb]
name=MariaDB Local Repo Server
baseurl=http://192.168.100.100/repos/mariadb
enabled=1
gpgcheck=0

 

For MongoDB Inc (MongoDB):

$ vi /etc/yum.repos.d/mongodb.repo

And add the following lines:

[mongodb]
name=MongoDB Local Repo Server
baseurl=http://192.168.100.100/repos/mongodb
enabled=1
gpgcheck=0

 

For Percona (Percona Server, Percona XtraDB Cluster, Percona Server for MongoDB):

$ vi /etc/yum.repos.d/percona.repo

And add the following lines:

[percona]
name=Percona Local Repo Server
baseurl=http://192.168.100.100/repos/percona
enabled=1
gpgcheck=0

 

For PostgreSQL:

$ vi /etc/yum.repos.d/postgresql.repo

And add the following lines:

[postgresql]
name=PostgreSQL Local Repo Server
baseurl=http://192.168.100.100/repos/postgresql
enabled=1
gpgcheck=0

 

For Oracle (MySQL Replication, MySQL Cluster):

$ vi /etc/yum.repos.d/oracle.repo

And add the following lines:

[oracle]
name=Oracle Local Repo Server
baseurl=http://192.168.100.100/repos/oracle
enabled=1
gpgcheck=0

4. Refresh the repository list:

$ sudo yum clean all
$ sudo yum repolist # make sure only the above repository is listed

The repository configuration is now complete. We may proceed to perform ClusterControl installation and subsequently deploy our database clusters without an Internet connection.

ClusterControl Offline Installation

For offline installation, ClusterControl requires a number of dependencies to be installed and prepared manually:

  1. Install a MySQL or MariaDB server for the CMON database
  2. Download and transfer the ClusterControl packages to the ClusterControl server

Install MySQL/MariaDB

MariaDB will be used by ClusterControl to store configuration and monitoring data for its operation.

1. On the ClusterControl server, run the following commands to install MariaDB and configure the MariaDB root password:

$ yum clean all
$ yum repolist
$ yum install mariadb mariadb-server
$ systemctl enable mariadb
$ systemctl start mariadb
$ mysqladmin -uroot password yourR00tP4ssw0rd
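
You can verify that the server is up and the root password works before proceeding:

$ mysql -uroot -pyourR00tP4ssw0rd -e "SELECT VERSION();"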

Download ClusterControl Packages

2. Make a staging directory for the ClusterControl RPMs:

$ mkdir /tmp/s9s

3. Download and transfer ClusterControl RPMs using the satellite server, or any other server that has an Internet connection. Get the latest stable packages from the Severalnines download page:

$ mkdir /tmp/s9sRPMs
$ cd /tmp/s9sRPMs
$ wget https://severalnines.com/downloads/cmon/clustercontrol-1.9.4-8386-x86_64.rpm
$ wget https://severalnines.com/downloads/cmon/clustercontrol-cloud-1.9.4-353-x86_64.rpm
$ wget https://severalnines.com/downloads/cmon/clustercontrol-clud-1.9.4-353-x86_64.rpm
$ wget https://severalnines.com/downloads/cmon/clustercontrol-controller-1.9.4-5638-x86_64.rpm
$ wget https://severalnines.com/downloads/cmon/clustercontrol-notifications-1.9.4-312-x86_64.rpm
$ wget https://severalnines.com/downloads/cmon/clustercontrol-ssh-1.9.4-127-x86_64.rpm
$ wget https://repo.severalnines.com/s9s-tools/CentOS_7/x86_64/s9s-tools-1.9-45.1.x86_64.rpm

4. Copy the packages to the ClusterControl host, 192.168.100.200:

$ scp *.rpm root@192.168.100.200:/tmp/s9s/

5. Install ClusterControl:

$ cd /tmp/s9s
$ yum localinstall *.rpm
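
A quick check that all ClusterControl components were installed; the list should match the RPMs downloaded earlier:

$ rpm -qa | grep -E 'clustercontrol|s9s-tools'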

6. Run the installer script to finish configuring ClusterControl and all of its components (we prefix the command with the S9S_ADMIN_EMAIL variable, which is passed to the script to simplify the installation):

$ S9S_ADMIN_EMAIL="admin@example.com" /var/www/html/clustercontrol/app/tools/setup-cc.sh

Answer all the questions to complete the installation. Specify the MySQL/MariaDB root password when asked, as configured under the Install MySQL/MariaDB section.

You should see the following lines:

=> ClusterControl installation completed!
Open your web browser to http://192.168.100.200/clustercontrol and
enter an email address and new password for the default Admin User.

Determining network interfaces. This may take a couple of minutes. Do NOT press any key.
Public/external IP => http://;; connection timed out; no servers could be reached/clustercontrol
Installation successful. If you want to uninstall ClusterControl then run setup-cc.sh --uninstall.

You may ignore the error; the installer script tries to reach the internet for a reverse IP lookup, which is expected to fail in an offline environment. Open the ClusterControl UI and register a super admin user to start deploying database clusters.

Offline Database Deployment using ClusterControl

Once you are logged in to the ClusterControl UI, you may proceed to deploy a database cluster by going to ClusterControl → Deploy and filling in all the necessary details. One thing is particular to this kind of environment: you have to choose “Do Not Setup Vendor Repositories” under the Repository dropdown, since we already preconfigured the repository definitions as shown in the Configure Database Repository section.

If you would like to deploy a cluster using the ClusterControl CLI, do not forget to specify the --use-internal-repos flag, as shown below:

$ s9s cluster --create \
        --cluster-type=postgresql \
        --nodes="192.168.100.211?master;192.168.100.212?slave;192.168.100.213?slave;" \
        --db-admin="postgres" \
        --db-admin-passwd="mySuperStrongP455w0rd" \
        --cluster-name=ft_replication_23986 \
        --os-user=root \
        --os-key-file=/root/.ssh/id_rsa \
        --provider-version=13 \
        --use-internal-repos \
        --log
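
Since the command above was run with --log, it streams the job output directly. You can also follow the deployment from another terminal with the s9s CLI (the job ID below is an example; pick the one reported for your deployment):

$ s9s job --list
$ s9s job --log --job-id=1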

That’s it. The database deployment should be able to use the satellite repository server for an offline installation.

Setting up Prometheus exporters

For the offline installation of Prometheus and its exporters, you need to download all the required packages first and place them on the ClusterControl node under /var/cache/cmon/packages.

The URLs of the packages can be found in /usr/share/cmon/templates/packages.conf, as shown below:

$ cat /usr/share/cmon/templates/packages.conf
#
# ClusterControl central URL database for
# 3rd party packages installed directly
# (not using APT/YUM repos)
#

[maxscale]
# clustercontrol will substitute the following keywords:
# @VENDOR@  : debian/ubuntu...
# @RELEASE@ : 5,6,7,xenial,wily...
url_deb="https://downloads.mariadb.com/MaxScale/2.5.7/packages/@VENDOR@/@RELEASE@/maxscale-2.5.7-1.@VENDOR@.@RELEASE@.x86_64.deb"
url_rpm="https://downloads.mariadb.com/files/MaxScale/2.5.7/packages/rhel/@RELEASE@/maxscale-2.5.7-1.rhel.@RELEASE@.x86_64.rpm"

[haproxy]
url_source="http://www.haproxy.org/download/1.8/src/haproxy-1.8.9.tar.gz"

[keepalived]
url_source="http://www.keepalived.org/software/keepalived-1.2.24.tar.gz"

[epel]
# To be substituted:
# @RELEASE@ : 6,7,8
url_rpm="http://dl.fedoraproject.org/pub/epel/epel-release-latest-@[email protected]"

[prometheus]
# requiredVersion: if such or greater version found
# clustercontrol will use the existing installed package
requiredVersion=2.29
url="https://github.com/prometheus/prometheus/releases/download/v2.29.2/prometheus-2.29.2.linux-amd64.tar.gz"

[haproxy_exporter]
requiredVersion=0.9
url="https://github.com/prometheus/haproxy_exporter/releases/download/v0.9.0/haproxy_exporter-0.9.0.linux-amd64.tar.gz"

[node_exporter]
requiredVersion=1.0.1
url="https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz"

[mysqld_exporter]
requiredVersion=0.13
url="https://github.com/prometheus/mysqld_exporter/releases/download/v0.13.0/mysqld_exporter-0.13.0.linux-amd64.tar.gz"

[postgres_exporter]
requiredVersion=0.4.7
url="https://github.com/wrouesnel/postgres_exporter/releases/download/v0.4.7/postgres_exporter_v0.4.7_linux-amd64.tar.gz"

[pgbouncer_exporter]
requiredVersion=0.4.0
url="https://github.com/prometheus-community/pgbouncer_exporter/releases/download/v0.4.0/pgbouncer_exporter-0.4.0.linux-amd64.tar.gz"

[proxysql_exporter]
requiredVersion=1.1
url="https://github.com/percona/proxysql_exporter/releases/download/v1.1.0/proxysql_exporter-1.1.0.linux-amd64.tar.gz"

[process_exporter]
requiredVersion=0.10.10
url="https://github.com/kedazo/process_exporter/releases/download/0.10.10/process_exporter-0.10.10.linux-amd64.tar.gz"

[mongodb_exporter]
requiredVersion=0.11.0
url="https://github.com/kedazo/mongodb_exporter/releases/download/v0.11.0/mongodb_exporter-v0.11.0.linux-amd64.tar.gz"

[redis_exporter]
requiredVersion=1.15.0
url="https://github.com/oliver006/redis_exporter/releases/download/v1.15.0/redis_exporter-v1.15.0.linux-amd64.tar.gz"

## for MSSQL
[mssql_exporter]
requiredVersion=0.5.4
url="https://github.com/severalnines/mssql_exporter/releases/download/0.5.4/mssql_exporter-0.5.4.linux-amd64.tar.gz"

[daemon]
url_rpm="http://libslack.org/daemon/download/daemon-0.6.4-1.x86_64.rpm"

Once the packages are in place on the server, you may proceed to enable Prometheus agent-based monitoring by going to ClusterControl → Dashboards → Enable Agent-based Monitoring.
