Connect to ClusterControl RPC API as Read-only User

In this article, we explore how to configure a dedicated read-only user for integration with the ClusterControl platform through the RPC interface. A dedicated read-only user provides secure and efficient access, allowing the necessary information to be retrieved from ClusterControl without putting data integrity at risk or permitting unauthorized modifications.

Use cases

Creating a read-only user is useful in the following example use cases:

  • Using a dedicated read-only user for ClusterControl prevents interference with other user accounts or privileges that might be required for application functions.
  • The read-only user can run non-invasive health checks on the database, like checking replication status, server load, and resource utilization, to provide valuable insights for proactive management.
  • The read-only user can be used to programmatically integrate ClusterControl with your existing monitoring and alerting tools.
  • For environments with strict compliance requirements, such as financial or healthcare databases, having a read-only user enables ClusterControl integration without violating access policies.
  • The read-only user can access ClusterControl via the GUI, CLI, and RPC API, making it suitable for non-administrative personnel who need to assess the monitored infrastructure, analyze its current state and trends, or perform auditing and reporting.

Creating a read-only user

Suppose a two-node PostgreSQL streaming replication setup is managed and monitored by a ClusterControl server at 192.168.20.10.

The following commands should be executed on the ClusterControl server.

  1. Add the primary IP address of the ClusterControl node inside /etc/default/cmon:

    RPC_BIND_ADDRESSES="192.168.20.10,127.0.0.1"
    
  2. Restart the cmon service to make sure ClusterControl listens on port 9501 on both IP addresses defined above:

    systemctl restart cmon
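    # Optional: verify that cmon now listens on port 9501 on both addresses.
    # This is a minimal check and assumes the ss utility (iproute2) is installed.
    ss -ltn | grep 9501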
    
  3. Create a read-only user called "readeruser":

    s9s user \
        --create \
        --group=readergroup \
        --create-group \
        --generate-key \
        --new-password=s3cr3tP455 \
        --email-address=reader@example.com \
        --first-name=Reader \
        --last-name=User \
        --batch \
        readeruser
    

    Note

    A new private and public key pair will be created under the ~/.s9s directory, named readeruser.key and readeruser.pub.

  4. List out the user and make sure it is there:

    $ s9s user --list --long
    A ID UNAME                      GROUPS      EMAIL              REALNAME
    - 1  system                     admins      -                  System User
    - 2  nobody                     nobody      -                  Default User
    - 3  admin                      admins      -                  Default User
    - 4  s9s-error-reporter-vagrant admins      -                  -
    A 5  dba                        admins      -                  -
    - 6  readeruser                 readergroup reader@example.com Reader User
    
  5. Get the Cmon Directory Tree (CDT) path for your cluster (think of this as ls -al in UNIX). In this example, our cluster name is "PostgreSQL 12", as shown below:

    $ s9s tree --list --long
    MODE        SIZE OWNER                      GROUP  NAME
    crwxrwx---+    - system                     admins PostgreSQL 12
    srwxrwxrwx     - system                     admins localhost
    drwxrwxr--  1, 0 system                     admins groups
    urwxr--r--     - admin                      admins admin
    urwxr--r--     - dba                        admins dba
    urwxr--r--     - nobody                     admins nobody
    urwxr--r--     - readeruser                 admins readeruser
    urwxr--r--     - s9s-error-reporter-vagrant admins s9s-error-reporter-vagrant
    urwxr--r--     - system                     admins system
    Total: 22 object(s) in 4 folder(s).
    
  6. Assign read permission to readeruser and readergroup for the cluster that you want. Our CDT path is "/PostgreSQL 12" (it always starts with a "/", similar to UNIX):

    $ s9s tree --add-acl --acl="group:readergroup:r--" "/PostgreSQL 12"
    Acl is added.
    $ s9s tree --add-acl --acl="user:readeruser:r--" "/PostgreSQL 12"
    Acl is added.
    

    Note

    You can also assign permission to a more specific path, for example "/PostgreSQL 12/192.168.20.62:5432", which allows "readeruser" to read only the objects of host 192.168.20.62 under this particular cluster.
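
To double-check the result, you can read the ACL entries back from the controller. This is a minimal verification sketch and assumes your s9s version supports the tree command's --get-acl option:

s9s tree --get-acl "/PostgreSQL 12"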

The configuration is complete. Next, we need to configure the s9s client on the reader's workstation.

CLI client configuration

The following commands should be executed on the reader's workstation (client).

  1. Install ClusterControl CLI. See Installation.

  2. Create a .s9s directory and copy the readeruser.key and readeruser.pub from the ClusterControl server into it:

    mkdir ~/.s9s
    cd ~/.s9s
    scp root@192.168.20.10:~/.s9s/readeruser.key .
    scp root@192.168.20.10:~/.s9s/readeruser.pub .
    
  3. Create an s9s configuration file at ~/.s9s/s9s.conf:

    [global]
    cmon_user=readeruser
    controller=https://192.168.20.10:9501
    
  4. At this point, you should be able to run the following commands:

    $ s9s cluster --ping
    PING Ok 4ms
    $ s9s user --whoami
    readeruser
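
Alternatively, instead of the configuration file, the same connection settings can be supplied as command-line options on each invocation. A minimal sketch using the standard s9s global options:

s9s user --whoami \
    --cmon-user=readeruser \
    --controller="https://192.168.20.10:9501"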
    

The client-side configuration is now complete. You may run the following commands from the reader's workstation to list the cluster's objects:

s9s cluster --cluster-id=10 --stat
s9s cluster --cluster-id=10 --stat --print-json     # if you want the output to be printed in JSON
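
Other read-only operations should work the same way once the user has read access to the cluster's CDT path, for example (using cluster ID 10 as above):

s9s node --list --long --cluster-id=10
s9s job --list --cluster-id=10
s9s backup --list --cluster-id=10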

Accessing the RPC API

If you would like to access the cluster objects over HTTP, send requests to the RPC v2 API, which is accessible on port 9501 (TLS enabled). Use one of the following two methods:

  1. Pass the username and password in a nested authenticate object together with the operation:

    curl -k 'https://192.168.20.10:9501/v2/clusters' \
    -XPOST -d \
    '{"operation": "getAllClusterInfo", 
    "authenticate": 
    {"password": "s3cr3tP455","username": "readeruser"}}'
    
  2. Authenticate once and reuse the session. This is the recommended way and is used by all our applications to connect to the ClusterControl controller: keep the cookie/session obtained after the first authentication and send it with subsequent requests. To obtain a session, send the authenticateWithPassword operation to the v2/auth endpoint:

    curl -k 'https://192.168.20.10:9501/v2/auth' \
    -XPOST -d \
    '{"operation":"authenticateWithPassword", 
    "user_name":"readeruser", 
    "password":"s3cr3tP455"}' \
    -c cookies.jar
    

    Then, include cookies.jar with every subsequent read-only request:

    curl -k 'https://192.168.20.10:9501/v2/clusters' \
    -XPOST -d \
    '{"operation":"getallclusterinfo"}' \
    -b cookies.jar
    

If the RPC call is successful, you should receive a response with "request_status": "Ok", similar to the following output:

{
    "controller_id": "683e2b4b-038f-4cf7-93a0-4c3f95b3db70",
    "request_processed": "2024-11-07T19:33:12.701Z",
    "request_status": "Ok",
    "request_user_id": 5,
    "total": 3,
    "clusters":
    [
    ...json content...
    ],
    "debug_messages":
    [
        "RPC V2 authenticated user is 'readeruser'."
    ]
}
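
For scripted integrations, for example feeding an existing monitoring or alerting tool, the JSON reply can be parsed with standard tooling. Below is a minimal sketch using jq; it assumes jq is installed, cookies.jar holds a valid session, and that the cluster objects in your reply expose the cluster_id, cluster_name, and state fields:

curl -ks 'https://192.168.20.10:9501/v2/clusters' \
-XPOST -d '{"operation":"getAllClusterInfo"}' \
-b cookies.jar | jq -r '.clusters[] | "\(.cluster_id) \(.cluster_name) \(.state)"'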

If the user tries to execute a request that is not read-only, for example sending a stop node job, which requires "execute" permission:

curl -k 'https://192.168.20.10:9501/v2/jobs' \
-XPOST -d '{"cluster_id": 21, "job": {"class_name": "CmonJobInstance",
"job_spec":
{"command": "stop",
"job_dat":{"clusterid": 21,
"node":{"hostname": "192.168.20.53"}}}},
"operation": "createJobInstance"}' \
-b cookies.jar

The reply should return "request_status": "AccessDenied", similar to the following output:

{
    "controller_id": "9eb8b1d1-f9d1-439e-945e-108982812bee",
    "error_string": "Execute access to cluster 21 for user readeruser is denied.",
    "request_processed": "2021-05-04T07:01:38.342Z",
    "request_status": "AccessDenied",
    "request_user_id": 7,
    "debug_messages":
    [
        "RPC V2 authenticated user is 'readeruser'."
    ]
}
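
If this account later needs to run jobs as well, the execute bit can be granted on the ClusterControl server using the same ACL syntax as before (note that the user is then no longer strictly read-only):

s9s tree --add-acl --acl="user:readeruser:r-x" "/PostgreSQL 12"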

For details on the list of operations supported by ClusterControl, see the ClusterControl RPC API reference manual.