
Prometheus

Setting up dedicated Prometheus node

When deploying the first cluster, ClusterControl sets up a Prometheus instance on the ClusterControl node. That instance serves as a time-series database for the data that exporters collect from the nodes. By default, the same Prometheus instance is reused for every other cluster deployed in ClusterControl.

In some cases this setup may not be enough:

  • You may have a very large cluster that generates a significant amount of metrics
  • You may have many clusters and, in total, the amount of data is large
  • You may want to increase the retention period from the default 15 days to something longer, causing more data to be stored

In all those cases you may find it useful to dedicate a separate node to Prometheus: a faster one, with more disk space available for data storage. Such a setup is possible in ClusterControl.

How do you do this? Assume that there are two clusters:

[Screenshot: initial situation - two clusters]

They share the same Prometheus server on host 10.0.2.15, port 9090.

[Screenshot: two clusters sharing one Prometheus server]

The idea is for the MariaDB cluster to use a dedicated Prometheus server.

[Screenshot: Dashboards tab]

To do that, open the MariaDB cluster and go to the Dashboards tab. From there, pick the option to re-enable and reconfigure agent-based monitoring:

[Screenshot: Re-enable agent-based monitoring]

More → Re-enable agent-based monitoring. This opens a window to perform the reconfiguration.

[Screenshot: configuration window]

There are settings to configure:

Host Selection

Determines on which host the Prometheus server will be installed. You can either pick an existing Prometheus installation or enter a hostname or IP address for a new Prometheus installation.

Scrape Interval

Defines how frequently Prometheus collects (scrapes) metrics from its targets. The default is 10 seconds.
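For reference, this maps to Prometheus's own configuration: in a hand-written prometheus.yml, the same setting lives in the global block. A minimal sketch (ClusterControl generates the actual file for you):

```yaml
# Minimal prometheus.yml fragment (sketch) - the default scrape
# interval applies to every job that does not override it.
global:
  scrape_interval: 10s
```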

Data Retention Size

Indicates the maximum amount of storage (in MB) allocated for metrics data. The default of 0 MB means no limit.
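Upstream Prometheus enforces size- and time-based limits through command-line flags. Assuming ClusterControl passes the form values through, the resulting invocation would look roughly like this sketch (not the exact command ClusterControl runs):

```shell
# Cap the TSDB at ~2 GB on disk; the time-based retention flag
# (default 15d) can be raised alongside it.
prometheus --storage.tsdb.retention.size=2GB \
           --storage.tsdb.retention.time=30d
```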

Data Directory

Specifies the location on disk where Prometheus will store metrics data. Optional field (can be left unspecified).
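On the Prometheus side this corresponds to the --storage.tsdb.path flag. The path below is only an illustrative example; if the field is left empty, Prometheus falls back to its default of a data/ directory:

```shell
prometheus --storage.tsdb.path=/var/lib/prometheus/data
```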

Node Exporter Settings

In this step you can configure settings related to the node exporter.

Scrape Interval

Frequency at which Prometheus retrieves system metrics from the node exporter (10 seconds by default).
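In prometheus.yml terms, this is a per-job scrape_interval on the node exporter's scrape job. A sketch with a placeholder target address (node exporter listens on port 9100 by default):

```yaml
scrape_configs:
  - job_name: node
    scrape_interval: 10s              # overrides the global default for this job
    static_configs:
      - targets: ['10.0.2.20:9100']   # placeholder host; 9100 is node exporter's default port
```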

Arguments

Custom parameters for node exporter functionality; for instance, --collector.systemd enables collecting metrics from the systemd service manager.

MySQL Exporter Settings

Here are the settings related to the MySQL exporter.

Scrape Interval

How often Prometheus scrapes metrics from the MySQL exporter (10 seconds by default).

Arguments

Additional parameters to specify exactly which MySQL metrics should be collected, such as:

  • --collect.perf_schema.eventswaits (Wait events)

  • --collect.perf_schema.file_events (File operation events)

  • --collect.perf_schema.indexiowaits (Index I/O waits)

  • --collect.perf_schema.tableiowaits (Table I/O waits)

  • --collect.info_schema.processlist (Active MySQL processes)

  • --collect.binlog_size (Binary log size)

  • --collect.global_status (Global status variables)

  • --collect.global_variables (Global MySQL variables)

  • --collect.slave_status (Replication slave status)

and other schema-related metrics.
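Put together, a mysqld_exporter invocation using several of these flags might look like the sketch below. The credentials file path is a placeholder; ClusterControl assembles the actual arguments from the list you configure:

```shell
# Sketch: enable a subset of collectors; credentials come from a
# my.cnf-style file rather than the command line.
mysqld_exporter \
  --config.my-cnf=/etc/mysqld_exporter/.my.cnf \
  --collect.global_status \
  --collect.global_variables \
  --collect.info_schema.processlist \
  --collect.perf_schema.tableiowaits
```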

Once all settings are defined, verify in the Preview that everything is exactly as you want it to be:

[Screenshot: preview]

After the job starts, you can track it in the Activity center:

[Screenshot: job running]
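Once the job completes, you can sanity-check the new Prometheus instance through its HTTP API: GET /api/v1/targets lists every scrape target together with its health. A small sketch that flags unhealthy targets in such a response (the host address in the comment is a placeholder):

```python
import json

def unhealthy_targets(api_json: str):
    """Return (job, instance) pairs for scrape targets whose health is not 'up'."""
    data = json.loads(api_json)
    return [
        (t["labels"]["job"], t["labels"]["instance"])
        for t in data["data"]["activeTargets"]
        if t["health"] != "up"
    ]

# In practice you would fetch the JSON from the new Prometheus host, e.g.
# urllib.request.urlopen("http://10.0.2.20:9090/api/v1/targets").read()
# (placeholder address). Here we use a sample in the shape the API returns:
sample = json.dumps({
    "status": "success",
    "data": {"activeTargets": [
        {"labels": {"job": "node", "instance": "10.0.2.20:9100"}, "health": "up"},
        {"labels": {"job": "mysqld", "instance": "10.0.2.21:9104"}, "health": "down"},
    ]},
})
print(unhealthy_targets(sample))  # -> [('mysqld', '10.0.2.21:9104')]
```

An empty result means every exporter is being scraped successfully by the new Prometheus server.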