
Dashboards

In ClusterControl v2, only agent-based monitoring via Prometheus is supported; agentless monitoring (monitoring via SSH) is deprecated. If you still need the agentless monitoring approach, please use ClusterControl v1.

If agent-based monitoring has not been activated, you will be presented with a notification panel prompting you to enable it.

You can also click Enable Monitoring on the Clusters page for the corresponding cluster. It opens the same configuration wizard described in the Enable Agent-Based Monitoring section below.

See also

To understand how ClusterControl performs monitoring jobs, see Monitoring Operations.

Enable Agent-Based Monitoring

Opens a step-by-step deployment wizard to enable agent-based monitoring.

For a new Prometheus deployment, ClusterControl installs the Prometheus server on the target host and configures exporters on all monitored hosts according to their roles in that particular cluster. If you choose an existing Prometheus server previously deployed by ClusterControl, it connects to that data source and configures the Prometheus exporters accordingly.

Configuration

This section configures the Prometheus monitoring server.

Field Description
Select Host
  • Specify a host to deploy the Prometheus server.
  • For a new deployment, this can be a new external host (dedicated Prometheus server), reachable via passwordless SSH, or the ClusterControl server itself. Try to avoid using an existing server with a database role as the disk usage could be demanding, depending on the data retention policy.
  • If you already have a running Prometheus server deployed by ClusterControl, the dropdown will list out the existing instance and you can pick one from the list.
  • At the moment, ClusterControl does not support importing an existing Prometheus server deployed outside of ClusterControl.
Scrape Interval
  • The scrape interval in seconds. This is a Prometheus global configuration. See Global Configuration.
  • The lowest value is 1. Be mindful of network latency; an interval that is too low may result in scrape errors, and a lower interval usually increases resource usage.
Data Retention
  • How long to keep old monitoring data, in days. The default is 15. See Storage.
Data Retention Size
  • The maximum number of bytes of storage blocks to retain. The oldest data will be removed first. See Storage.
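For reference, these wizard fields map to standard Prometheus options: the scrape interval is a `global` setting in `prometheus.yml`, while retention is controlled by the Prometheus server's command-line flags. The values below are illustrative, except the 15-day retention, which matches both the wizard's default and Prometheus's own default:

```yaml
# prometheus.yml, maps to the wizard's "Scrape Interval" field (illustrative value)
global:
  scrape_interval: 10s        # seconds between scrapes of each target

# Data retention is set via Prometheus server flags, not prometheus.yml:
#   --storage.tsdb.retention.time=15d    # "Data Retention" (days)
#   --storage.tsdb.retention.size=50GB   # "Data Retention Size" (bytes; illustrative)
```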

Exporters

An exporter aggregates and exposes data from a non-Prometheus system in a format Prometheus can scrape, acting as the monitoring agent (hence the term "agent-based monitoring").

In the deployment wizard, the Node exporter and Process exporter will be available for all nodes. The subsequent exporters’ configuration will depend on the cluster type and role. For example, a Percona XtraDB Cluster with ProxySQL will have additional MySQL exporter and ProxySQL exporter sections. For each exporter, you may customize the scrape interval and arguments to be passed when running the agent.

For the list of supported exporters and their configurations, see Supported Exporters.

Field Description
Scrape Interval
  • The scrape interval in seconds for this particular exporter.
  • The lowest value is 1. Be mindful of network latency; an interval that is too low may result in scrape errors, and a lower interval usually increases resource usage.
Arguments
  • List of collector flags for the exporter. Get the flags from the exporter’s documentation page. Some exporters do not use collector flags.
  • Type the flag name and value (if any) in the text field one at a time, and press Enter to add it to the list. Repeat until all flags are added before continuing to the next step.
See also

To understand how ClusterControl installs and configures the Prometheus server and all exporters, see Agent-Based Monitoring.

Supported Exporters

Each monitored node is configured with at least three exporters, depending on the node's role:

  1. Process exporter (port 9011)
  2. Node/system metrics exporter (port 9100)
  3. Database or application exporters, depending on the node's role

On every monitored host, ClusterControl configures and daemonizes the exporter process using a program called daemon. The ClusterControl host is therefore recommended to have an Internet connection, so it can install the necessary packages and automate the Prometheus deployment. For offline installation, the packages must be pre-downloaded into /var/cache/cmon/packages on the ClusterControl node; for the list of required packages and links, refer to /usr/share/cmon/templates/packages.conf. Apart from the Prometheus scrape process, ClusterControl also connects directly to the process exporter via HTTP to determine the process state of the node. No sampling via SSH is involved in this process.
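To illustrate the direct HTTP check described above (a sketch, not ClusterControl's code): the process exporter serves metrics in the Prometheus text exposition format, which can be parsed with nothing beyond the standard library. The sample payload below is hypothetical; in practice you would fetch it from the exporter's `/metrics` endpoint:

```python
# Sketch: reading a process-state metric from the Prometheus text
# exposition format that a process exporter serves. The payload below is
# a hypothetical sample; in practice it would be fetched with, e.g.,
# urllib.request.urlopen("http://<host>:9011/metrics").

def parse_metrics(text: str) -> dict[str, float]:
    """Parse 'name{labels} value' lines; comment lines (#) are skipped."""
    metrics: dict[str, float] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

SAMPLE = """\
# HELP namedprocess_namegroup_num_procs number of processes in this group
namedprocess_namegroup_num_procs{groupname="mysqld"} 1
namedprocess_namegroup_num_procs{groupname="sshd"} 2
"""

procs = parse_metrics(SAMPLE)
# A process-state check could then test whether mysqld is running:
print(procs['namedprocess_namegroup_num_procs{groupname="mysqld"}'] >= 1)
# True
```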

Note

With agent-based monitoring, ClusterControl depends on a working Prometheus for accurate reporting on management and monitoring data. Therefore, Prometheus and exporter processes are managed by the internal process manager thread. A non-working Prometheus will have a significant impact on the CMON process.

Since ClusterControl 1.7.3 allows multiple instances per single host (only for PostgreSQL-based clusters), ClusterControl takes a conservative approach to avoid port conflicts: if there is more than one instance of the same process to monitor, it automatically increments the exporter port for every additional instance. Suppose you have two ProxySQL instances deployed by ClusterControl and want to monitor them both via Prometheus. ClusterControl configures the first ProxySQL exporter on the default port, 42004, and the second on port 42005, incremented by 1.
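The port assignment described above can be sketched as follows (an illustrative sketch; the function name and structure are assumptions, only the increment-per-instance behavior follows the text):

```python
# Illustrative sketch of the per-instance exporter port increment described
# above: each additional instance of the same process gets the next port.

def assign_exporter_ports(default_port: int, n_instances: int) -> list[int]:
    """Give each instance of the same process its own exporter port."""
    return [default_port + i for i in range(n_instances)]

# Two ProxySQL instances: the first gets the default 42004, the second 42005.
print(assign_exporter_ports(42004, 2))  # [42004, 42005]
```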

The collector flags are configured based on the node’s role, as shown in the following table (some exporters do not use collector flags):

Exporter Collector Flags
mysqld_exporter
  • collect.info_schema.processlist
  • collect.info_schema.tables
  • collect.info_schema.innodb_metrics
  • collect.global_status
  • collect.global_variables
  • collect.slave_status
  • collect.perf_schema.tablelocks
  • collect.perf_schema.eventswaits
  • collect.perf_schema.file_events
  • collect.perf_schema.file_instances
  • collect.binlog_size
  • collect.perf_schema.tableiowaits
  • collect.perf_schema.indexiowaits
  • collect.info_schema.tablestats
node_exporter
  • arp, bcache, bonding, conntrack, cpu, diskstats, edac, entropy, filefd, filesystem, hwmon, infiniband, ipvs, loadavg, mdadm, meminfo, netdev, netstat, nfs, nfsd, sockstat, stat, textfile, time, timex, uname, vmstat, wifi, xfs, zfs

Monitoring Dashboards

Dashboards are composed of individual monitoring panels arranged on a grid. ClusterControl pre-configures a number of dashboards depending on the cluster type and host’s role. The following table explains them:

Cluster/Application type Dashboard Description
All clusters System Overview Provides panels of host metrics and usage for an individual host.
Cluster Overview Provides selected host and database metrics for all hosts for comparison.
MySQL/MariaDB-based clusters MySQL Server – General Provides panels of general database metrics and usage for the individual database node.
MySQL Server – Caches Provides important cache-related metrics for the individual database node.
MySQL InnoDB Metrics Provides important InnoDB-related metrics for the individual database node.
MySQL Replication MySQL Replication Provides panels related to replication for the individual database node.
Galera Cluster Galera Overview Provides cross-server Galera cluster metrics for all database nodes.
Galera Server Charts Provides panels related to Galera replication metrics for the individual database node.
ProxySQL ProxySQL Overview Provides important ProxySQL metrics for individual ProxySQL nodes.
HAProxy HAProxy Overview Provides important HAProxy metrics for an individual node.
PostgreSQL PostgreSQL Overview Provides panels of general database metrics and usage for an individual database node.
TimescaleDB TimescaleDB Overview Provides panels of general database metrics and usage for the individual database node.
MongoDB Sharded Cluster MongoDB Cluster Overview Provides panels related to all Mongos of the cluster.
MongoDB ReplicaSet/Sharded Cluster MongoDB ReplicaSet Provides panels related to a replica set for the individual database hosts.
MongoDB ReplicaSet/Sharded Cluster MongoDB Server Provides panels of general database metrics and usage for the individual database host.
Redis Redis Overview Provides important Redis metrics for individual Redis nodes.
Microsoft SQL Server MSSQL Overview Provides panels related to Microsoft SQL Server metrics for individual SQL server nodes.

When clicking on the gear icon (top-right), the following functionalities are available:

Field Description
Exporters
  • Opens a pop-up to show a list of exporters enabled with their status for this cluster.
Configuration
  • Opens a pop-up showing exporters grouped by host, together with the Prometheus global configuration settings. Click Edit to change any Prometheus global configuration. Once saved, ClusterControl triggers a re-enable agent-based monitoring job to reconfigure the exporters. Existing monitoring data is preserved after this job completes.
Re-enable Agent Based Monitoring
  • Reconfigures the agent-based monitoring setup. Use this feature to point the cluster at a different Prometheus server or change existing Prometheus parameters, similar to the Configuration feature. It is also useful for refreshing the agent-based monitoring configuration after a misconfiguration or missing data points, or simply to reconfigure everything as in a fresh installation. Existing monitoring data is preserved after this job completes.
Disable Agent Based Monitoring
  • Disables this monitoring mode and reverts to the agentless approach. You can choose whether to keep Prometheus running and continue pulling statistics, or to stop the exporters and Prometheus altogether. Either way, the Dashboards are disabled once the job completes.
Enable Tooltip Sync
  • Synchronizes tooltips across graphs: hovering over a data point on one graph highlights the same timestamp on all other graphs, making it easier to compare data points at the same moment in time.
Disable Tooltip Sync
  • Disables tooltip syncing across the graphs.

The monitoring panels’ section provides the following functionalities:

Field Description
Dashboard
  • A dropdown list of pre-configured dashboards.
Host
  • List of database nodes' hostnames (or IP addresses) and their roles in the particular cluster. Shown only for non-global dashboards.
Range Selection
  • Opens a pop-up to configure the time range for all panels. It provides quick time ranges, refresh rates and a date-time picker at the bottom of the window.
Zoom In Time Range (+)
  • Auto-pick the next shorter time range for the graphs.
Zoom Out Time Range (-)
  • Auto-pick the next longer time range for the graphs.
Refresh Every
  • Picks the monitoring data’s refresh interval.

For more info on how ClusterControl performs monitoring jobs, see Monitoring Operations. To learn more about Prometheus monitoring systems and exporters, please refer to Prometheus Documentation.
