Offline Installation
The official installation script, `install-cc`, has been updated with several helper functions to simplify the installation of ClusterControl in air-gapped (offline) environments. These functions enable the creation of `.deb` or `.rpm` repositories that can be imported into the target system.
The process involves first running the script on a server or virtual machine that matches the Linux distribution of the air-gapped server and has internet access. The script performs the following tasks:
- Downloads the necessary ClusterControl packages along with their dependencies, such as the MySQL server or `gnuplot` for the CMON Controller.
- Builds a local file-based `.deb` or `.rpm` package repository appropriate for the Linux distribution.
- Generates a tarball containing the repository directory and the repository metadata file, which can then be transferred to and imported on another server.
Take note that the following ClusterControl features will not work without an Internet connection:
- Backup → Create/Schedule Backup → Upload to Cloud – requires a connection to cloud providers.
- Integrations → Cloud Providers – requires a connection to cloud providers.
- Manage → Load Balancer – requires a connection to the EPEL, ProxySQL, HAProxy, and MariaDB repositories.
- Manage → Upgrades – requires a connection to the provider’s repository.
- Deploy Database Cluster – requires a connection to the database provider’s repository.
Step 1: Build and export the repository
The offline repository is prepared on a temporary host that has internet access. The installer script can build the local repository for offline usage on RHEL/Rocky Linux/AlmaLinux 8/9, Debian 11/12, and Ubuntu 20.04/22.04/24.04.
Download the installer script on the temporary host that has Internet access:
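The commands below assume the commonly documented Severalnines download location for `install-cc`; verify the URL against the current official documentation before use:

```shell
# Fetch the install-cc script and make it executable.
# The URL is an assumption based on the usual Severalnines download
# location; confirm it in the official documentation.
wget https://severalnines.com/downloads/cmon/install-cc
chmod +x install-cc
```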
Build the local repository for ClusterControl by using the `--airgap-repo` flag:
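As shown in the example output that follows, the build is started by running the script with the flag:

```shell
# Build a local .deb/.rpm repository for air-gapped installation
./install-cc --airgap-repo
```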
Example
The following is example output from the temporary server (the staging host with an Internet connection) on Ubuntu 24.04:
[root@staging]:/home/vagrant# ./install-cc --airgap-repo
This script will configure the Severalnines repository server for both deb and rpm packages, and install the ClusterControl Web Application and Controller.
It will install a new MySQL server or utilize an existing MySQL server on the host.
2025-05-30 05:52:23 UTC -- Only RHEL/RockyLinux/AlmaLinux 8|9, Debian 11|12, Ubuntu 20.04|22.04|24.04 LTS versions are supported
2025-05-30 05:52:23 UTC -- Waiting for locks ...
2025-05-30 05:52:23 UTC -- Update distro repos ...
141 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists...
Building dependency tree...
Reading state information...
lsb-release is already the newest version (12.0-2).
lsb-release set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 141 not upgraded.
2025-05-30 05:52:56 UTC -- DOWNLOAD_DIR=/home/vagrant/cc_packages
2025-05-30 05:52:56 UTC -- REPO_DIR=/usr/local/repo/cc_repo
2025-05-30 05:52:56 UTC -- EXPORT_FILE=cc_repo.tar.gz
2025-05-30 05:52:56 UTC -- IMPORT_REPO_DIR=/usr/local/repo/cc_repo
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
wget is already the newest version (1.21.4-1ubuntu4.1).
The following additional packages will be installed:
dirmngr gnupg-l10n gnupg-utils gpg gpg-agent gpg-wks-client gpgconf gpgsm gpgv keyboxd
Suggested packages:
pinentry-gnome3 tor parcimonie xloadimage gpg-wks-server scdaemon
The following packages will be upgraded:
dirmngr gnupg gnupg-l10n gnupg-utils gpg gpg-agent gpg-wks-client gpgconf gpgsm gpgv keyboxd
11 upgraded, 0 newly installed, 0 to remove and 130 not upgraded.
Need to get 2291 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gpg-wks-client amd64 2.4.4-2ubuntu17.2 [70.9 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 dirmngr amd64 2.4.4-2ubuntu17.2 [323 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gnupg-utils amd64 2.4.4-2ubuntu17.2 [109 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gpgsm amd64 2.4.4-2ubuntu17.2 [232 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gpg-agent amd64 2.4.4-2ubuntu17.2 [227 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gpg amd64 2.4.4-2ubuntu17.2 [565 kB]
Get:7 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gpgconf amd64 2.4.4-2ubuntu17.2 [103 kB]
Get:8 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gnupg all 2.4.4-2ubuntu17.2 [359 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 keyboxd amd64 2.4.4-2ubuntu17.2 [78.3 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gpgv amd64 2.4.4-2ubuntu17.2 [158 kB]
Get:11 http://us.archive.ubuntu.com/ubuntu noble-updates/main amd64 gnupg-l10n all 2.4.4-2ubuntu17.2 [66.1 kB]
Fetched 2291 kB in 3s (674 kB/s)
(Reading database ... 50146 files and directories currently installed.)
Preparing to unpack .../0-gpg-wks-client_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking gpg-wks-client (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../1-dirmngr_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking dirmngr (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../2-gnupg-utils_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking gnupg-utils (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../3-gpgsm_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking gpgsm (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../4-gpg-agent_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking gpg-agent (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../5-gpg_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking gpg (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../6-gpgconf_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking gpgconf (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../7-gnupg_2.4.4-2ubuntu17.2_all.deb ...
Unpacking gnupg (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../8-keyboxd_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking keyboxd (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Preparing to unpack .../9-gpgv_2.4.4-2ubuntu17.2_amd64.deb ...
Unpacking gpgv (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Setting up gpgv (2.4.4-2ubuntu17.2) ...
(Reading database ... 50146 files and directories currently installed.)
Preparing to unpack .../gnupg-l10n_2.4.4-2ubuntu17.2_all.deb ...
Unpacking gnupg-l10n (2.4.4-2ubuntu17.2) over (2.4.4-2ubuntu17) ...
Setting up gnupg-l10n (2.4.4-2ubuntu17.2) ...
Setting up gpgconf (2.4.4-2ubuntu17.2) ...
Setting up gpg (2.4.4-2ubuntu17.2) ...
Setting up gnupg-utils (2.4.4-2ubuntu17.2) ...
Setting up gpg-agent (2.4.4-2ubuntu17.2) ...
Setting up gpgsm (2.4.4-2ubuntu17.2) ...
Setting up dirmngr (2.4.4-2ubuntu17.2) ...
Setting up keyboxd (2.4.4-2ubuntu17.2) ...
Setting up gnupg (2.4.4-2ubuntu17.2) ...
Setting up gpg-wks-client (2.4.4-2ubuntu17.2) ...
Processing triggers for install-info (7.1-3build2) ...
Processing triggers for man-db (2.12.0-4build2) ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
deb http://repo.severalnines.com/s9s-tools/noble/ ./
--2025-05-30 05:53:55-- http://repo.severalnines.com/severalnines-repos.asc
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 963 [application/octet-stream]
Saving to: 'STDOUT'
- 100%[=================================================>] 963 --.-KB/s in 0s
2025-05-30 05:53:55 (53.1 MB/s) - written to stdout [963/963]
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
deb [arch=amd64] http://repo.severalnines.com/deb ubuntu main
2025-05-30 05:53:59 UTC -- Added /etc/apt/sources.list.d/s9s-repo.list
2025-05-30 05:53:59 UTC -- Updating repo ...
Get:1 http://repo.severalnines.com/deb ubuntu InRelease [3973 B]
Get:2 http://repo.severalnines.com/s9s-tools/noble ./ InRelease [1876 B]
Hit:3 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu noble InRelease
Hit:5 http://us.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:6 http://us.archive.ubuntu.com/ubuntu noble-backports InRelease
Get:7 http://repo.severalnines.com/deb ubuntu/main amd64 Packages [71.9 kB]
Get:8 http://repo.severalnines.com/s9s-tools/noble ./ Packages [785 B]
Fetched 78.5 kB in 9s (8410 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
130 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: http://repo.severalnines.com/deb/dists/ubuntu/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://repo.severalnines.com/s9s-tools/noble/./InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
Hit:1 http://us.archive.ubuntu.com/ubuntu noble InRelease
Hit:2 http://repo.severalnines.com/deb ubuntu InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu noble-updates InRelease
Hit:4 http://repo.severalnines.com/s9s-tools/noble ./ InRelease
Hit:5 http://us.archive.ubuntu.com/ubuntu noble-backports InRelease
Hit:6 http://security.ubuntu.com/ubuntu noble-security InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
130 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: http://repo.severalnines.com/deb/dists/ubuntu/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://repo.severalnines.com/s9s-tools/noble/./InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
2025-05-30 05:54:34 UTC -- Downloading .deb packages and dependencies...
2025-05-30 05:54:34 UTC -- Processing clustercontrol-mcc...
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
--2025-05-30 05:54:38-- http://repo.severalnines.com/deb/pool/main/c/clustercontrol-cloud/clustercontrol-cloud_2.3.2-423_x86_64.deb
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16809906 (16M) [application/octet-stream]
Saving to: 'clustercontrol-cloud_2.3.2-423_x86_64.deb'
clustercontrol-cloud_2.3.2-42 100%[=================================================>] 16.03M 1.05MB/s in 16s
2025-05-30 05:54:55 (1002 KB/s) - 'clustercontrol-cloud_2.3.2-423_x86_64.deb' saved [16809906/16809906]
--2025-05-30 05:54:55-- http://repo.severalnines.com/deb/pool/main/c/clustercontrol-clud/clustercontrol-clud_2.3.2-423_x86_64.deb
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13383006 (13M) [application/octet-stream]
Saving to: 'clustercontrol-clud_2.3.2-423_x86_64.deb'
clustercontrol-clud_2.3.2-423 100%[=================================================>] 12.76M 1.67MB/s in 8.8s
2025-05-30 05:55:04 (1.45 MB/s) - 'clustercontrol-clud_2.3.2-423_x86_64.deb' saved [13383006/13383006]
--2025-05-30 05:55:04-- http://repo.severalnines.com/deb/pool/main/c/clustercontrol-proxy/clustercontrol-proxy_2.3.2-78_x86_64.deb
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7532004 (7.2M) [application/octet-stream]
Saving to: 'clustercontrol-proxy_2.3.2-78_x86_64.deb'
clustercontrol-proxy_2.3.2-78 100%[=================================================>] 7.18M 1.34MB/s in 6.0s
2025-05-30 05:55:11 (1.20 MB/s) - 'clustercontrol-proxy_2.3.2-78_x86_64.deb' saved [7532004/7532004]
--2025-05-30 05:55:11-- http://repo.severalnines.com/deb/pool/main/c/clustercontrol-kuber-proxy/clustercontrol-kuber-proxy-0.1.0-39-Linux.deb
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15708012 (15M) [application/octet-stream]
Saving to: 'clustercontrol-kuber-proxy-0.1.0-39-Linux.deb'
clustercontrol-kuber-proxy-0. 100%[=================================================>] 14.98M 2.37MB/s in 8.3s
2025-05-30 05:55:19 (1.81 MB/s) - 'clustercontrol-kuber-proxy-0.1.0-39-Linux.deb' saved [15708012/15708012]
--2025-05-30 05:55:19-- http://repo.severalnines.com/deb/pool/main/c/clustercontrol-notifications/clustercontrol-notifications_2.3.2-373_x86_64.deb
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3291698 (3.1M) [application/octet-stream]
Saving to: 'clustercontrol-notifications_2.3.2-373_x86_64.deb'
clustercontrol-notifications_ 100%[=================================================>] 3.14M 1.03MB/s in 3.0s
2025-05-30 05:55:23 (1.03 MB/s) - 'clustercontrol-notifications_2.3.2-373_x86_64.deb' saved [3291698/3291698]
--2025-05-30 05:55:23-- http://repo.severalnines.com/deb/pool/main/c/clustercontrol-ssh/clustercontrol-ssh_2.3.2-213_x86_64.deb
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3156366 (3.0M) [application/octet-stream]
Saving to: 'clustercontrol-ssh_2.3.2-213_x86_64.deb'
clustercontrol-ssh_2.3.2-213_ 100%[=================================================>] 3.01M 1.14MB/s in 2.6s
2025-05-30 05:55:26 (1.14 MB/s) - 'clustercontrol-ssh_2.3.2-213_x86_64.deb' saved [3156366/3156366]
--2025-05-30 05:55:26-- http://repo.severalnines.com/deb/pool/main/c/clustercontrol-mcc/clustercontrol-mcc_2.3.2-457_amd64.deb
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6834822 (6.5M) [application/octet-stream]
Saving to: 'clustercontrol-mcc_2.3.2-457_amd64.deb'
clustercontrol-mcc_2.3.2-457_ 100%[=================================================>] 6.52M 1.17MB/s in 6.3s
2025-05-30 05:55:33 (1.03 MB/s) - 'clustercontrol-mcc_2.3.2-457_amd64.deb' saved [6834822/6834822]
2025-05-30 05:55:33 UTC -- Processing clustercontrol-controller...
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
--2025-05-30 05:55:36-- http://repo.severalnines.com/deb/pool/main/c/clustercontrol-controller/clustercontrol-controller-2.3.2-12981-x86_64.deb
Resolving repo.severalnines.com (repo.severalnines.com)... 54.247.53.237
Connecting to repo.severalnines.com (repo.severalnines.com)|54.247.53.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 42128170 (40M) [application/octet-stream]
Saving to: 'clustercontrol-controller-2.3.2-12981-x86_64.deb'
clustercontrol-controller-2.3 100%[=================================================>] 40.18M 1.37MB/s in 23s
2025-05-30 05:55:59 (1.76 MB/s) - 'clustercontrol-controller-2.3.2-12981-x86_64.deb' saved [42128170/42128170]
--2025-05-30 05:55:59-- http://us.archive.ubuntu.com/ubuntu/pool/main/f/fonts-android/fonts-droid-fallback_6.0.1r16-1.1build1_all.deb
Resolving us.archive.ubuntu.com (us.archive.ubuntu.com)... 91.189.91.82, 91.189.91.81, 91.189.91.83, ...
Connecting to us.archive.ubuntu.com (us.archive.ubuntu.com)|91.189.91.82|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1805156 (1.7M) [application/vnd.debian.binary-package]
Saving to: 'fonts-droid-fallback_6.0.1r16-1.1build1_all.deb'
fonts-droid-fallback_6.0.1r16 100%[=================================================>] 1.72M 871KB/s in 2.0s
2025-05-30 05:56:02 (871 KB/s) - 'fonts-droid-fallback_6.0.1r16-1.1build1_all.deb' saved [1805156/1805156]
--2025-05-30 05:56:02-- http://us.archive.ubuntu.com/ubuntu/pool/main/g/gcc-14/libgomp1_14.2.0-4ubuntu2~24.04_amd64.deb
Resolving us.archive.ubuntu.com (us.archive.ubuntu.com)... 91.189.91.82, 91.189.91.83, 91.189.91.81, ...
Connecting to us.archive.ubuntu.com (us.archive.ubuntu.com)|91.189.91.82|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 147940 (144K) [application/vnd.debian.binary-package]
Saving to: 'libgomp1_14.2.0-4ubuntu2~24.04_amd64.deb'
libgomp1_14.2.0-4ubuntu2~24.0 100%[=================================================>] 144.47K 142KB/s in 1.0s
2025-05-30 05:56:04 (142 KB/s) - 'libgomp1_14.2.0-4ubuntu2~24.04_amd64.deb' saved [147940/147940]
--2025-05-30 05:56:04-- http://us.archive.ubuntu.com/ubuntu/pool/main/f/fftw3/libfftw3-double3_3.3.10-1ubuntu3_amd64.deb
Resolving us.archive.ubuntu.com (us.archive.ubuntu.com)... 91.189.91.82, 91.189.91.83, 91.189.91.81, ...
Connecting to us.archive.ubuntu.com (us.archive.ubuntu.com)|91.189.91.82|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 838254 (819K) [application/vnd.debian.binary-package]
Saving to: 'libfftw3-double3_3.3.10-1ubuntu3_amd64.deb'
libfftw3-double3_3.3.10-1ubun 100%[=================================================>] 818.61K 440KB/s in 1.9s
2025-05-30 05:56:06 (440 KB/s) - 'libfftw3-double3_3.3.10-1ubuntu3_amd64.deb' saved [838254/838254]
--2025-05-30 05:56:06-- http://us.archive.ubuntu.com/ubuntu/pool/main/f/fonts-dejavu/fonts-dejavu-mono_2.37-8_all.deb
Resolving us.archive.ubuntu.com (us.archive.ubuntu.com)... 91.189.91.83, 91.189.91.81, 91.189.91.82, ...
Connecting to us.archive.ubuntu.com (us.archive.ubuntu.com)|91.189.91.83|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 502232 (490K) [application/vnd.debian.binary-package]
Saving to: 'fonts-dejavu-mono_2.37-8_all.deb'
fonts-dejavu-mono_2.37-8_all. 100%[=================================================>] 490.46K 301KB/s in 1.6s
2025-05-30 05:56:09 (301 KB/s) - 'fonts-dejavu-mono_2.37-8_all.deb' saved [502232/502232]
...
...
...
dpkg-scanpackages: info: Wrote 140 entries to output Packages file.
2025-05-30 06:04:58 UTC -- Debian repository created successfully in /usr/local/repo/cc_repo
2025-05-30 06:04:58 UTC -- Exporting repository to /usr/local/repo/cc_repo.tar.gz...
2025-05-30 06:05:29 UTC -- Repository exported!
The process of creating a local repository with the installer script involves several steps:
- Create the local repository
- Download all packages required by ClusterControl, including dependencies
- Export the repository into a compressed file before transferring it to the air-gapped host
The tarball is named `cc_repo.tar.gz` and is stored one level above the repository directory (`/usr/local/repo`). The filename can be changed with the `EXPORT_FILE` environment variable.
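For example, to export the repository under a custom name (the filename here is purely illustrative):

```shell
# EXPORT_FILE overrides the default tarball name (cc_repo.tar.gz)
EXPORT_FILE=cc_repo_noble.tar.gz ./install-cc --airgap-repo
```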
Step 2: Remove default repositories
To ensure a smooth installation process, remove the default repository files on the ClusterControl host so that the imported offline repository (built in Step 1) will be used as the default repository.
On RHEL-based systems, the default repositories are defined under the `/etc/yum.repos.d` directory. Remove all repository definition files under `/etc/yum.repos.d` as below:
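One cautious way to do this is to move the definition files aside instead of deleting them, so they can be restored later (the backup directory is an arbitrary choice):

```shell
# Back up, then clear, the default YUM/DNF repository definitions
mkdir -p /root/repo-backup
mv /etc/yum.repos.d/*.repo /root/repo-backup/
```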
Or, disable the repositories:
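On RHEL/Rocky Linux/AlmaLinux 8 and 9 this can be done with the `dnf config-manager` plugin (part of `dnf-plugins-core`):

```shell
# Disable every configured repository without removing its file
dnf config-manager --set-disabled "*"
```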
Step 3: Import the repository to the air-gapped host
Importing a repository tarball takes two steps:

1. Copy the `install-cc` and `cc_repo.tar.gz` files to the target host. You may use remote copy tools like SFTP, SCP, or rsync, or a media device like a USB drive or DVD.
2. Run the `install-cc` script with the `--import-repo` parameter.
Before running the `--import-repo` command in the air-gapped environment, disable the default repositories, since they will be replaced with the one prepared in the previous step.
The tarball (`cc_repo.tar.gz`) must be in the same directory as the installer script. Run the repository import process with the following command:
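With both files in place, the import is a single invocation:

```shell
# Extract the tarball and configure it as a local package repository
./install-cc --import-repo
```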
Example
Transfer the installer script, `install-cc`, and the local repository tarball, `cc_repo.tar.gz`, from the staging host (with Internet access) to the offline ClusterControl host (without Internet access):
[root@staging ~]# sftp [email protected]
[email protected]'s password:
Connected to 10.10.10.11.
sftp> put install-cc
Uploading install-cc to /home/vagrant/install-cc
install-cc-dev 100% 59KB 1.4MB/s 00:00
sftp> exit
[root@staging ~]# cd /usr/local/repo/
[root@staging ~]# sftp [email protected]
[email protected]'s password:
Connected to 10.10.10.11.
sftp> put cc_repo.tar.gz
Uploading cc_repo.tar.gz to /home/vagrant/cc_repo.tar.gz
cc_repo.tar.gz 100% 266MB 15.5MB/s 00:17
Import the repository on the ClusterControl host:
[root@clustercontrol ~]:/home/vagrant# ./install-cc --import-repo
This script will configure the Severalnines repository server for both deb and rpm packages, and install the ClusterControl Web Application and Controller.
It will install a new MySQL server or utilize an existing MySQL server on the host.
2025-05-30 06:09:51 UTC -- Only RHEL/RockyLinux/AlmaLinux 8|9, Debian 11|12, Ubuntu 20.04|22.04|24.04 LTS versions are supported
2025-05-30 06:09:51 UTC -- Waiting for locks ...
2025-05-30 06:09:51 UTC -- Update distro repos ...
141 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists...
Building dependency tree...
Reading state information...
lsb-release is already the newest version (12.0-2).
lsb-release set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 141 not upgraded.
2025-05-30 06:10:13 UTC -- Extracting /home/vagrant/cc_repo.tar.gz to /usr/local/repo/cc_repo...
2025-05-30 06:10:14 UTC -- Configuring Debian repository...
Get:1 file:/usr/local/repo/cc_repo stable InRelease
Ign:1 file:/usr/local/repo/cc_repo stable InRelease
Get:2 file:/usr/local/repo/cc_repo stable Release [626 B]
Get:2 file:/usr/local/repo/cc_repo stable Release [626 B]
Get:3 file:/usr/local/repo/cc_repo stable Release.gpg
Ign:3 file:/usr/local/repo/cc_repo stable Release.gpg
Get:4 file:/usr/local/repo/cc_repo stable/main amd64 Packages
Ign:4 file:/usr/local/repo/cc_repo stable/main amd64 Packages
Get:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Ign:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Get:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Ign:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Get:4 file:/usr/local/repo/cc_repo stable/main amd64 Packages
Ign:4 file:/usr/local/repo/cc_repo stable/main amd64 Packages
Get:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Ign:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Get:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Ign:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Get:4 file:/usr/local/repo/cc_repo stable/main amd64 Packages
Ign:4 file:/usr/local/repo/cc_repo stable/main amd64 Packages
Get:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Ign:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Get:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Ign:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Get:4 file:/usr/local/repo/cc_repo stable/main amd64 Packages [45.9 kB]
Get:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Ign:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Get:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Ign:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Get:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Ign:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Get:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Ign:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Get:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Ign:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Get:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Ign:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Get:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Ign:5 file:/usr/local/repo/cc_repo stable/main Translation-en
Get:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Ign:6 file:/usr/local/repo/cc_repo stable/main amd64 Components
Hit:7 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:8 http://us.archive.ubuntu.com/ubuntu noble InRelease
Get:9 http://us.archive.ubuntu.com/ubuntu noble-updates InRelease [126 kB]
Hit:10 http://us.archive.ubuntu.com/ubuntu noble-backports InRelease
Fetched 126 kB in 4s (32.2 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
141 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: No Hash entry in Release file /var/lib/apt/lists/partial/_usr_local_repo_cc%5frepo_dists_stable_Release which is considered strong enough for security purposes
2025-05-30 06:10:20 UTC -- Debian repository successfully configured.
2025-05-30 06:10:20 UTC -- Repository imported!
Step 4: Install ClusterControl in offline mode
Once the local ClusterControl repository is imported, install ClusterControl without any access to the public internet. The installation process remains the same; simply run the installation script with the `OFFLINE=1` environment variable set:
OFFLINE=1 INNODB_BUFFER_POOL_SIZE={db_memory_allocation} \
SEND_DIAGNOSTICS=0 \
S9S_CMON_PASSWORD={cmon_password} \
S9S_ROOT_PASSWORD={root_password} ./install-cc
Adjust the `INNODB_BUFFER_POOL_SIZE` parameter based on the hardware or VM specifications you are using, and fill in the `cmon_password` and `root_password` according to your password policy.
To configure the Severalnines repository to use an HTTPS URL instead, specify `USE_REPO_TLS=1`.
Example
Perform the offline ClusterControl installation on the ClusterControl host:
[root@clustercontrol ~]# OFFLINE=1 INNODB_BUFFER_POOL_SIZE=512 SEND_DIAGNOSTICS=0 S9S_CMON_PASSWORD=cmon123 S9S_ROOT_PASSWORD=root123 ./install-cc
This script will configure the Severalnines repository server for both deb and rpm packages, and install the ClusterControl Web Application and Controller.
It will install a new MySQL server or utilize an existing MySQL server on the host.
2025-05-30 06:12:33 UTC -- Only RHEL/RockyLinux/AlmaLinux 8|9, Debian 11|12, Ubuntu 20.04|22.04|24.04 LTS versions are supported
Reading package lists...
Building dependency tree...
Reading state information...
lsb-release is already the newest version (12.0-2).
0 upgraded, 0 newly installed, 0 to remove and 141 not upgraded.
2025-05-30 06:12:34 UTC -- System RAM is > 1.5G
2025-05-30 06:12:34 UTC -- Setting MySQL innodb_buffer_pool_size to 40% of system RAM
2025-05-30 06:12:34 UTC -- MySQL innodb_buffer_pool_size set to 512M
2025-05-30 06:12:34 UTC -- Run with 'INNODB_BUFFER_POOL_SIZE=512 ./install-cc' to customize the innodb buffer pool setting.
2025-05-30 06:12:34 UTC -- Severalnines would like your help improving our installation process.
2025-05-30 06:12:34 UTC -- Information such as OS, memory and install success helps us improve how we onboard our users.
2025-05-30 06:12:34 UTC -- None of the collected information identifies you personally.
2025-05-30 06:12:34 UTC -- !!
This script will configure the Severalnines repository server for both deb and rpm packages, and install the ClusterControl Web Application and Controller.
It will install a new MySQL server or utilize an existing MySQL server on the host.
2025-05-30 06:12:34 UTC -- Installing required packages ...
Reading package lists...
Building dependency tree...
Reading state information...
...
...
...
No VM guests are running outdated hypervisor (qemu) binaries on this host.
=> Waiting 60s until the CMON Controller completes its startup cycle ...
2025-05-30 06:18:22 UTC -- Create temporary cc setup user ...
User created.
2025-05-30 06:18:22 UTC -- *** Restarting the Controller process to generate CMON API rpc_tls files.
Synchronizing state of cmon-events.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable cmon-events
Created symlink /etc/systemd/system/multi-user.target.wants/cmon-events.service -> /etc/systemd/system/cmon-events.service.
Synchronizing state of cmon-ssh.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable cmon-ssh
Created symlink /etc/systemd/system/multi-user.target.wants/cmon-ssh.service -> /etc/systemd/system/cmon-ssh.service.
Synchronizing state of cmon-cloud.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable cmon-cloud
Created symlink /etc/systemd/system/multi-user.target.wants/cmon-cloud.service -> /etc/systemd/system/cmon-cloud.service.
2025-05-30 06:18:43 UTC -- Configuring ClusterControl MCC / Ops-C ...
2025-05-30 06:18:43 UTC -- Configuring ccmgr (cmon-proxy) ...
ClusterControl Manager - admin CLI v2.2
Controller 127.0.0.1:9501 registered successfully
Changing port from 19051 to 443
Changing frontend_path from /app to /var/www/html/clustercontrol-mcc
File /var/www/html/clustercontrol-mcc/config.js updated successfully
Configuration /usr/share/ccmgr/ccmgr.yaml updated successfully
Please restart 'cmon-proxy' service to apply changes
2025-05-30 06:18:43 UTC -- ClusterControl MCC / Ops-C frontend installed ...
2025-05-30 06:18:43 UTC -- ClusterControl installation completed!
Open your web browser to https://10.10.10.11 to start using ClusterControl.
If you want to uninstall ClusterControl then please follow the instructions here, https://severalnines.com/docs/administration.html#uninstall
Step 5: Post-installation
Access the ClusterControl GUI at `https://<ClusterControl_host>` and create a new admin user.
Once ClusterControl is up and running, import your existing cluster or deploy a new database cluster and start managing them from one place. Make sure SSH key-based authentication is configured from the ClusterControl node to the database nodes.
1. Generate an SSH key on the ClusterControl node.
2. Set up SSH key-based authentication from ClusterControl to the database and load balancer nodes.

Repeat step 2 for every database/load balancer host that you are going to manage.
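A minimal sketch of the two steps above, assuming the root user and a hypothetical database node at 10.10.10.21 (substitute your own hosts):

```shell
# Step 1: generate an SSH key pair on the ClusterControl node
# (RSA, no passphrase, default location)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Step 2: copy the public key to a managed node; 10.10.10.21 is a
# hypothetical database node address, substitute your own hosts
ssh-copy-id [email protected]
```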