ClusterControl GUI

Introduction

ClusterControl's Graphical User Interface (GUI) serves as a unified platform for managing database clusters. It streamlines complex database operations and ensures the reliability, performance, and security of database systems by providing a single interface to control multiple ClusterControl instances (controllers). The GUI is designed to handle large-scale environments, supporting over 1000 nodes.

The ClusterControl GUI is built upon the closely integrated clustercontrol-mcc and clustercontrol-proxy packages. This newer version of the clustercontrol-mcc web application no longer relies on an Apache web server. Instead, the cmon-proxy service serves the web application via a process called ccmgr, also known as the ClusterControl Manager. By default, a ClusterControl installation registers the local controller.

Info

Prior to ClusterControl v2.3.2 (April 2025), the ClusterControl GUI was provided by a different package called clustercontrol2. This package remains available for users who would like to keep Apache as the web server (e.g., for security compliance, advanced proxying, or TLS requirements). However, clustercontrol2 does not include the multi-controller and Kubernetes functionality.

The ClusterControl GUI can be accessed via https://<ClusterControl_host> (or any other port passed during the initial setup). Use one of the ClusterControl users from the existing ClusterControl installation/environment to log in.
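As a quick sanity check, the endpoint can be probed from the command line. A minimal sketch, assuming a placeholder hostname of cc.example.com (substitute your own host and port):

```shell
# Probe the GUI endpoint over HTTPS. CC_HOST is a placeholder hostname, not
# a value from this page; -k accepts the self-signed certificate that a
# default installation typically ships with.
CC_HOST=cc.example.com
URL="https://${CC_HOST}/"
# Print only the HTTP status code; a reachable GUI normally answers 200.
curl -ks -o /dev/null -w '%{http_code}\n' "$URL" || true
```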

Files and location

The clustercontrol-mcc package installs the clustercontrol-proxy package as a dependency (the web server). The web application is located at /var/www/html/clustercontrol-mcc and the configuration file is located at /var/www/html/clustercontrol-mcc/config.js. Here is an example of its content:

window.FEAS_ENV = {
  CMON_API_URL: '/',
  GIT_SHA: 'f457257a3',
  GIT_BRANCH: 'release-mcc-2.3.3',
  VERSION: '2.3.3',
  BUILD_NUMBER: '541',
  USER_REGISTRATION: 1,
}
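Values such as the GUI version can be read straight out of config.js with standard text tools. A minimal sketch that parses the VERSION field with sed; a sample file is embedded here so the snippet is self-contained, but on a live host you would point it at /var/www/html/clustercontrol-mcc/config.js:

```shell
# Parse the VERSION field out of a config.js. A shortened sample of the file
# shown above is embedded so the snippet runs anywhere; replace CONFIG with
# the real path on a ClusterControl host.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
window.FEAS_ENV = {
  CMON_API_URL: '/',
  VERSION: '2.3.3',
  BUILD_NUMBER: '541',
}
EOF
# Capture the single-quoted value after "VERSION: ".
version=$(sed -n "s/.*VERSION: '\([^']*\)'.*/\1/p" "$CONFIG")
echo "GUI version: ${version}"
rm -f "$CONFIG"
```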

The web server's service name is cmon-proxy, and its systemd unit definition file is located at /etc/systemd/system/cmon-proxy.service. The cmon-proxy service is also responsible for managing, monitoring, and collecting data from the local and external ClusterControl controllers. See ClusterControl Proxy for details.
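Routine checks on the service can be done with standard systemd tooling. A sketch assuming a host where the package is installed; the unit name and path come from this page, while the flags are stock systemctl/journalctl options:

```shell
# Inspect the cmon-proxy service. The unit name and file path match the ones
# documented above; the commands assume a live ClusterControl host.
UNIT=cmon-proxy
UNIT_FILE="/etc/systemd/system/${UNIT}.service"

systemctl status "$UNIT" --no-pager || true       # current service state
cat "$UNIT_FILE" 2>/dev/null || true              # unit definition
journalctl -u "$UNIT" -n 20 --no-pager || true    # recent log lines
```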

Managing multiple controllers

A single ClusterControl GUI can manage multiple ClusterControl controllers. This horizontal scaling is achieved through the ClusterControl Operations Center (Ops-C) and is ideal for large database farms with thousands of nodes.

To set up Ops-C, install additional ClusterControl servers and import or add clusters to them as you would in a standard ClusterControl deployment. Then, activate the multi-controller GUI mode (Ops-C) to integrate these additional controllers into the same GUI for remote monitoring within a single interface.

By default, each ClusterControl GUI connects to its local controller in single-controller GUI mode. For details on activating multi-controller GUI mode (Ops-C), refer to Integration → ClusterControl Ops-C.

If the multi-controller interface is enabled (it is disabled by default):

  • The Controllers’ Overview page features a dashboard that presents aggregated information on registered or managed CMON controllers, providing insights into the status of controllers, ongoing jobs, alarms, and backups, as well as the status of the database clusters and nodes.
  • The web user interface allows users to select the “Active CMON Controller”, which determines the information displayed throughout the application (excluding the Controllers’ Overview) and directs user actions for clusters or nodes to the selected CMON Controller.

After all the ClusterControl controllers have been added to the ClusterControl GUI, the Operations Center shows status information for the clusters, nodes, and controllers on the Overview page.

From there, you can discover the status of the clusters and nodes on a specific controller, and go through the node list of each cluster on the selected controller.