cmon_sd

The cmon_sd service simplifies Prometheus target discovery for all nodes managed by ClusterControl by exposing an HTTP endpoint. This feature is particularly useful for external Prometheus instances not managed by ClusterControl. The service works by monitoring ClusterControl's internal objects and converting them into readily discoverable Prometheus targets.

This project is publicly hosted on GitHub: cmon_sd.

Install

Navigate to the /usr/local/bin directory, then download and extract the latest binary from here. At the moment, only the amd64/x86-64/x64 architecture is supported:

cd /usr/local/bin
wget https://github.com/severalnines/cmon-sd/releases/download/v0.0.8/cmon_sd_0.0.8_linux_amd64.tar.gz
tar -xzf cmon_sd_*_linux_amd64.tar.gz
rm -Rf cmon_sd_*_linux_amd64.tar.gz     # clean up
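
As a quick sanity check, you can confirm that the host architecture matches the supported amd64/x86-64 build before running the binary:

uname -m     # expected output: x86_64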

Options

Name, shorthand    Description
--port, -p         Listening port. Defaults to 8080.
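
For example, to listen on a custom port instead of the default 8080 (the CMON_USERNAME and CMON_PASSWORD environment variables described under Usage are still required for the service to reach ClusterControl):

./cmon_sd --port 9909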

Usage

  1. Create a ClusterControl user that can log in to ClusterControl. The user must have a global view of all clusters and can be a read-only user. In this example, we will create a user called cmonsd which belongs to a global read-only team called global-ro:

    1. To create a team, go to ClusterControl GUI → User management → Create user or team → Create team.

    2. In the Team name field, specify the team name: global-ro. Click Continue.

    3. Under the Users section, skip adding a user for now; we will add it later. Click Continue.

    4. Under the Permissions section, toggle off the following:

      • Change controller configuration
      • Change LDAP settings
      • Deploy clusters
    5. From the Clusters permission level dropdown, choose "View". This gives users in this team read-only access to all clusters. Click Continue and Finish.

    6. To create a user, go to ClusterControl GUI → User management → Create user or team → Create user.

    7. Under the Details section, specify all the necessary information. For the username, we can specify "cmonsd". Toggle off Force change password and click Continue.

    8. Under the Team section, choose "global-ro" from the dropdown. Click Continue and Finish to create the user.

  2. Pass the CMON_USERNAME and CMON_PASSWORD environment variables to the cmon_sd binary:

    CMON_USERNAME='cmonsd' CMON_PASSWORD='SuperSecret$$$P455' ./cmon_sd -p 9909
    
  3. Verify the HTTP response:

    $ curl http://127.0.0.1:9909
    [
        {
            "targets": [
                "10.10.10.17:9100",
                "10.10.10.17:9011",
                "10.10.10.17:9104",
                "10.10.10.16:9100",
                "10.10.10.16:9011",
                "10.10.10.16:9104",
                "10.10.10.18:9100",
                "10.10.10.18:9011",
                "10.10.10.18:9104"
            ],
            "labels": {
                "ClusterID": "641",
                "ClusterName": "PXC57",
                "ClusterType": "GALERA",
                "cid": "641"
            }
        }
    ]
    
  4. Alternatively, you can configure it as a systemd unit. See Run as systemd.

Run as systemd

  1. To create a systemd unit file for the cmon_sd service, save the following content to /etc/systemd/system/cmon_sd.service:

    [Unit]
    Description=CMON Service Discovery
    # Ensures cmon starts before this service
    After=cmon.service
    # Makes this service dependent on cmon
    Requires=cmon.service
    
    [Service]
    Type=simple
    ExecStart=/usr/local/bin/cmon_sd -p 9909
    WorkingDirectory=/usr/local/bin/
    
    # Environment variables
    Environment="CMON_USERNAME=cmonsd"
    Environment="CMON_PASSWORD=SuperSecret$$$P455"
    
    # Security and Reliability
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    
  2. Apply the configuration by running:

    sudo systemctl daemon-reload
    
  3. Start the service immediately and enable it to run on boot:

    sudo systemctl start cmon_sd
    sudo systemctl enable cmon_sd
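
  4. Optionally, confirm the service is up and responding by checking its status and querying the endpoint (jq is only used here to pretty-print the JSON):

    sudo systemctl status cmon_sd
    curl -s http://127.0.0.1:9909 | jq .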
    

Sample prometheus.yml

On your Prometheus instance, configure the http_sd_configs accordingly:

---
alerting:
  alertmanagers:
  - static_configs:
    - targets:
global:
  evaluation_interval: 10s
  external_labels:
    monitor: clustercontrol
  scrape_interval: 10s
  scrape_timeout: 10s
rule_files:
scrape_configs:
- http_sd_configs:
  - url: http://10.10.10.1:9909
  job_name: clustercontrol
...
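
Before restarting Prometheus, you can optionally validate the configuration with promtool (assuming the file above is saved as /etc/prometheus/prometheus.yml):

promtool check config /etc/prometheus/prometheus.yml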

Restart Prometheus to apply the change. Prometheus should then start scraping all targets managed by ClusterControl.
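
To double-check that the targets were picked up, you can query Prometheus's targets API (this assumes Prometheus listens on its default port 9090; jq is optional):

curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[].labels'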