Monitoring Operation

Generally, ClusterControl performs its monitoring, alerting, and trending duties in the following four ways:

  1. SSH – Host metrics collection using the SSH library.
  2. Prometheus – Host and database metrics collection using Prometheus server and exporters.
  3. Database client – Database metrics collection using the CMON database client library.
  4. Advisor – Mini programs written using ClusterControl DSL and running within ClusterControl itself, for monitoring, tuning, and alerting purposes.

Starting from version 1.7.0, ClusterControl supports two methods of monitoring operation:

  1. Agentless monitoring (default).
  2. Agent-based monitoring with Prometheus.

The monitoring operation method is not a global configuration and is bound per cluster. This allows you to have two different database clusters configured with two different monitoring methods simultaneously. For example, Cluster A uses SSH sampling while Cluster B uses a Prometheus agent-based setup to gather host monitoring data.

Regardless of the monitoring method chosen, database and load balancer (except HAProxy) metrics are still sampled agentlessly by CMON’s database client library and stored inside the CMON database for reporting (alarms, notifications, operational reports) and for accurate management decisions in critical operations like failover and recovery. That said, with agent-based monitoring, ClusterControl does not use SSH to sample host metrics, which can be excessive in some environments.

Caution

ClusterControl allows you to switch between agentless and agent-based monitoring per cluster. However, you will lose the monitoring data each time you do this.

Agentless Monitoring

For host and load balancer stats collection, ClusterControl executes this task via SSH with super-user privileges. Therefore, passwordless SSH with super-user privileges is vital to allow ClusterControl to run the necessary commands remotely with proper escalation. This pull approach has several advantages compared to the agent-based monitoring method:

  • Agentless – There is no need for an agent to be installed, configured, and maintained.
  • Unified management and monitoring configuration – SSH can be used to pull monitoring metrics or push management jobs to the target nodes.
  • Simplified deployment – The only requirement is a proper passwordless SSH setup (see the verification sketch after this list). SSH is also secure and encrypted.
  • Centralized setup – One ClusterControl server can manage multiple servers and clusters, provided it has sufficient resources.
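
As a quick, hedged illustration of the passwordless SSH prerequisite above (this is not ClusterControl code, and the node address is a made-up example), the following sketch checks from the controller host that a target node accepts key-based SSH and allows non-interactive privilege escalation:

```python
import subprocess

def check_passwordless_ssh(host: str, user: str = "root") -> bool:
    """Return True if user@host is reachable without a password prompt and
    sudo works non-interactively, i.e. the agentless monitoring prerequisite."""
    cmd = [
        "ssh",
        "-o", "BatchMode=yes",       # fail instead of prompting for a password
        "-o", "ConnectTimeout=5",
        f"{user}@{host}",
        "sudo -n true",              # non-interactive privilege escalation test
    ]
    return subprocess.run(cmd, capture_output=True).returncode == 0

if __name__ == "__main__":
    print(check_passwordless_ssh("192.168.1.10"))  # hypothetical monitored node
```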

However, there are also drawbacks to the agentless monitoring approach, a.k.a. the pull mechanism:

  • The monitoring data is accurate only from the ClusterControl perspective. For example, if there is a network glitch and ClusterControl loses communication with the monitored host, the sampling will be skipped until the next available cycle.
  • For high granularity monitoring, there will be network overhead due to the increased sampling rate where ClusterControl needs to establish more connections to every target host.
  • ClusterControl will keep on attempting to re-establish a connection to the target node because it has no agent to do this on its behalf.
  • Redundant data sampling if you have more than one ClusterControl server monitoring a cluster since each ClusterControl server has to pull the monitoring data for itself.

The above points are the reasons we introduced agent-based monitoring, as described in the next section.

Agent-based Monitoring

Starting from version 1.7.0, ClusterControl introduced agent-based monitoring integration with Prometheus. Other operations like management, scaling, and deployment are still performed through an agentless approach, as described in Management and Deployment Operations. Agent-based monitoring can eliminate excessive SSH connections to the monitored hosts and offload the monitoring jobs to a dedicated monitoring system like Prometheus.

With an agent-based configuration, you can use a set of new dashboards that use Prometheus as the data source and give access to its flexible query language and multi-dimensional data model, with time-series data identified by metric name and key/value pairs. Simply put, in this configuration ClusterControl integrates with Prometheus to retrieve the collected monitoring data and visualize it in the ClusterControl UI, much like a GUI client for Prometheus. ClusterControl also connects to the exporters via HTTP GET and POST methods to determine the process state for process management purposes. For the list of Prometheus exporters, see Monitoring Tools.
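
As a hedged illustration of this integration pattern (not ClusterControl's actual implementation; the Prometheus address and exporter host below are assumptions), the sketch queries Prometheus' standard /api/v1/query HTTP endpoint and probes an exporter's /metrics endpoint:

```python
import json
import urllib.parse
import urllib.request

PROMETHEUS = "http://192.168.1.5:9090"  # hypothetical Prometheus address

def instant_query(expr: str) -> dict:
    """Run a PromQL instant query through Prometheus' /api/v1/query endpoint."""
    url = PROMETHEUS + "/api/v1/query?query=" + urllib.parse.quote(expr)
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def exporter_is_up(host: str, port: int) -> bool:
    """HTTP GET against an exporter's /metrics endpoint as a simple liveness probe."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/metrics", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

print(instant_query("up"))                    # scrape targets and their up/down state
print(exporter_is_up("192.168.1.10", 9100))   # node exporter on a monitored host
```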

One Prometheus data source can be shared among multiple clusters within ClusterControl. You have the option to deploy a new Prometheus server or import an existing Prometheus server, under ClusterControl → Dashboards → Enable Agent Based Monitoring.

Attention

Importing an external Prometheus host (which was not deployed by ClusterControl) is not supported at the moment due to the possibility of incompatible data structures exposed by Prometheus exporters.

Monitoring Tools

Agentless

In agentless monitoring mode, ClusterControl's monitoring duties only require an OpenSSH server package on the monitored hosts. ClusterControl uses the libssh client library to collect host metrics from the monitored hosts – CPU, memory, disk usage, network, disk IO, processes, etc. The OpenSSH client package is required on the ClusterControl host only for setting up passwordless SSH and for debugging purposes. Other SSH implementations like Dropbear and TinySSH are not supported.
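
ClusterControl performs this collection remotely through libssh; purely as an illustration of the kind of data involved (this is not ClusterControl code), the following sketch reads memory and load-average figures locally from /proc:

```python
def meminfo_kb() -> dict:
    """Parse /proc/meminfo into a {field: value-in-kB} dictionary."""
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.split()[0])  # first token is the value in kB
    return values

def loadavg() -> tuple:
    """Return the 1, 5 and 15 minute load averages from /proc/loadavg."""
    with open("/proc/loadavg") as f:
        one, five, fifteen = f.read().split()[:3]
    return float(one), float(five), float(fifteen)

mem = meminfo_kb()
print("memory used (kB):", mem["MemTotal"] - mem["MemAvailable"])
print("load average:", loadavg())
```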

Agent-based

In agent-based monitoring mode, ClusterControl requires a Prometheus server to be running on port 9090, and all monitored nodes to be configured with at least three exporters, depending on the node’s role:

  1. Process exporter (port 9011)
  2. Node/system metrics exporter (port 9100)
  3. Database or application exporters

On every monitored host, ClusterControl configures and daemonizes the exporter process using systemd. An Internet connection is recommended so the necessary packages can be installed and the Prometheus deployment automated. For offline installation, the packages must be pre-downloaded into /var/cache/cmon/packages on the ClusterControl node. For the list of required packages and links, refer to /usr/share/cmon/templates/packages.conf. Apart from the Prometheus scrape process, ClusterControl also connects to the process exporter directly via HTTP calls to determine the process state of the node. No sampling via SSH is involved in this process.
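
As a minimal sketch (not part of ClusterControl; the addresses are placeholders), the following TCP connect test verifies that the Prometheus server and the two mandatory exporters are listening on the ports listed above:

```python
import socket

# Hypothetical addresses; the ports match the defaults listed above.
TARGETS = {
    "prometheus":       ("192.168.1.5", 9090),
    "process exporter": ("192.168.1.10", 9011),
    "node exporter":    ("192.168.1.10", 9100),
}

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Plain TCP connect test against a Prometheus/exporter port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in TARGETS.items():
    state = "up" if port_open(host, port) else "down"
    print(f"{name}: {host}:{port} -> {state}")
```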

Note

With agent-based monitoring, ClusterControl depends on a working Prometheus server for accurate reporting of management and monitoring data. Therefore, the Prometheus and exporter processes are managed by the internal process manager thread. A non-working Prometheus will have a significant impact on the CMON process.

Since ClusterControl 1.7.3 supports multiple instances per single host (only for PostgreSQL-based clusters), ClusterControl takes a conservative approach to avoid port conflicts: if there is more than one instance of the same process to monitor, it automatically increments the exporter port for every additional instance. Suppose you have two ProxySQL instances deployed by ClusterControl and you would like to monitor them both via Prometheus. ClusterControl will configure the first ProxySQL exporter to run on the default port, 42004, while the second ProxySQL exporter will be configured with port 42005, incremented by 1.
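
A minimal sketch of that port-assignment rule, using the ProxySQL example above (the helper function is purely illustrative, not ClusterControl's internal logic):

```python
def exporter_port(default_port: int, instance_index: int) -> int:
    """instance_index: 0 for the first instance on the host, 1 for the second, ..."""
    return default_port + instance_index

print(exporter_port(42004, 0))  # first ProxySQL exporter  -> 42004 (default port)
print(exporter_port(42004, 1))  # second ProxySQL exporter -> 42005
```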

The collector flags are configured based on the node’s role, as listed below (some exporters do not use collector flags). A short sketch after the list illustrates how these flag names map to exporter command-line options:

mysqld_exporter collector flags:
  • collect.info_schema.processlist
  • collect.info_schema.tables
  • collect.info_schema.innodb_metrics
  • collect.global_status
  • collect.global_variables
  • collect.slave_status
  • collect.perf_schema.tablelocks
  • collect.perf_schema.eventswaits
  • collect.perf_schema.file_events
  • collect.perf_schema.file_instances
  • collect.binlog_size
  • collect.perf_schema.tableiowaits
  • collect.perf_schema.indexiowaits
  • collect.info_schema.tablestats
node_exporter collectors:
  • arp, bcache, bonding, conntrack, cpu, diskstats, edac, entropy, filefd, filesystem, hwmon, infiniband, ipvs, loadavg, mdadm, meminfo, netdev, netstat, nfs, nfsd, sockstat, stat, textfile, time, timex, uname, vmstat, wifi, xfs, zfs
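
Purely as an illustration (the exact invocation and any additional connection options are generated by ClusterControl and may differ), the following sketch shows how the mysqld_exporter flag names listed above would translate into command-line switches:

```python
# Flag names copied from the list above; whether they are passed exactly like
# this is up to ClusterControl's deployment job.
MYSQLD_EXPORTER_FLAGS = [
    "collect.info_schema.processlist",
    "collect.info_schema.tables",
    "collect.info_schema.innodb_metrics",
    "collect.global_status",
    "collect.global_variables",
    "collect.slave_status",
    "collect.perf_schema.tablelocks",
    "collect.perf_schema.eventswaits",
    "collect.perf_schema.file_events",
    "collect.perf_schema.file_instances",
    "collect.binlog_size",
    "collect.perf_schema.tableiowaits",
    "collect.perf_schema.indexiowaits",
    "collect.info_schema.tablestats",
]

command = ["mysqld_exporter"] + ["--" + flag for flag in MYSQLD_EXPORTER_FLAGS]
print(" ".join(command))
```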

Database Client Libraries

When gathering database stats and metrics, regardless of the monitoring method, the ClusterControl Controller (CMON) connects to the database servers directly via database client libraries – libmysqlclient (MySQL/MariaDB and ProxySQL), libpq (PostgreSQL), and libmongoc (MongoDB). That is why it is crucial to set up proper privileges for the ClusterControl server from the database server’s perspective. For MySQL-based clusters, ClusterControl requires the database user “cmon”, while for other databases any username can be used for monitoring, as long as it is granted super-user privileges. Most of the time, ClusterControl sets the required privileges (or uses the specified database user) automatically during the cluster import or cluster deployment stage.
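
As a hedged example of verifying such a monitoring user (ClusterControl itself links libmysqlclient; the snippet below uses the third-party PyMySQL package as a stand-in, and the host and password are placeholders), you could connect as “cmon” and inspect its grants:

```python
import pymysql  # third-party package, used here only as a stand-in client

conn = pymysql.connect(host="192.168.1.11", user="cmon", password="cmon_password")
try:
    with conn.cursor() as cur:
        cur.execute("SHOW GRANTS")       # list the privileges granted to this user
        for (grant,) in cur.fetchall():
            print(grant)
finally:
    conn.close()
```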

Load Balancers

For load balancers, ClusterControl requires the following additional tools:

  • Maxadmin on the MariaDB MaxScale server.
  • netcat and/or socat on the HAProxy server, to connect to the HAProxy socket file (a minimal socket example follows this list).
  • A MySQL client on the ProxySQL server.
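
To illustrate why a socket client is needed for HAProxy (a hedged sketch, not ClusterControl code; the socket path is an assumption and must match the stats socket defined in haproxy.cfg), the following connects to the HAProxy admin socket and issues the standard “show stat” command:

```python
import socket

HAPROXY_SOCKET = "/var/run/haproxy.socket"  # hypothetical path; must match haproxy.cfg

def haproxy_show_stat(path: str = HAPROXY_SOCKET) -> str:
    """Send 'show stat' to the HAProxy admin socket and return the
    CSV-formatted statistics it replies with."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(b"show stat\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

print(haproxy_show_stat())
```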

Agentless vs Agent-based Architecture

The following diagram summarizes both host and database monitoring processes executed by ClusterControl using libssh and database client libraries (agentless approach):

The following diagram summarizes both host and database monitoring processes executed by ClusterControl using Prometheus and database client libraries (agent-based approach):


Timeouts and Intervals

The ClusterControl Controller (CMON) is a multi-threaded process. For agentless monitoring, the ClusterControl Controller sampling thread connects via SSH to each monitored host once and maintains a persistent connection (hence, no timeout) until the host drops or disconnects it when sampling host stats. It may establish more connections depending on the jobs assigned to the host, since most management jobs run in their own threads. For example, cluster recovery runs on the recovery thread, Advisor execution runs on a cron thread, and process monitoring runs on the process collector thread.

For agent-based monitoring, the Scrape Interval and Data Retention period depend on the Prometheus settings.

The ClusterControl monitoring thread performs the following sampling operations at the following intervals:

Metric – Interval
MySQL query/status/variables – Every second
Process collection (/proc) – Every 10 seconds
Server detection – Every 10 seconds
Host (/proc, /sys) – Every 30 seconds (configurable via host_stats_collection_interval)
Database (PostgreSQL and MongoDB only) – Every 30 seconds (configurable via db_stats_collection_interval)
Database schema – Every 3 hours (configurable via db_schema_stats_collection_interval)
Load balancer – Every 15 seconds (configurable via lb_stats_collection_interval)

The Advisors (imperative scripts), which can be created, compiled, tested, and scheduled directly from the ClusterControl UI under Manage → Developer Studio, can make use of SSH and the database client libraries for monitoring, data processing, and alerting within the ClusterControl domain, with the following restrictions:

  • A hard limit of 5 seconds for SSH execution,
  • A default limit of 10 seconds for database connections, configurable via net_read_timeout, net_write_timeout, and connect_timeout in the CMON configuration file,
  • A total script execution time limit of 60 seconds, after which CMON ungracefully aborts it.

Short-interval monitoring data like MySQL queries and status are stored directly in the CMON database, while long-interval monitoring data like weekly/monthly/yearly data points are aggregated every 60 seconds and stored in memory for 10 minutes. These behaviors are not configurable due to the architecture design.
