ClusterControl CLI
The ClusterControl CLI is a command-line tool to interact with, control, and manage database clusters using the ClusterControl Database Platform. By default, the installer script automatically installs and configures this package on the ClusterControl node. You can also install it on another computer or workstation to manage the database cluster remotely. This command-line project is open source and publicly available on GitHub.
ClusterControl CLI opens a new door for cluster automation where you can easily integrate it with existing deployment automation tools like Ansible, Puppet, Chef, or Salt. Note that not all functionality available in the ClusterControl GUI is available via the CLI, and vice versa.
Overview
The command-line tool is invoked by executing a binary called s9s. The commands are basically JSON messages being sent over to the ClusterControl Controller (CMON) RPC interface. Communication between the ClusterControl CLI (s9s) and the cmon process (ClusterControl Controller) is encrypted using TLS and requires port 9501 to be accessible on the controller node.
Installation
The ClusterControl CLI is automatically installed and configured if ClusterControl was installed via one of the methods described under Online Installation. Install the ClusterControl CLI manually if you want to use the CLI outside of the ClusterControl host, or if you installed ClusterControl using the manual or offline installation method.
Installer script
We have built an installer script for the s9s-tools package, available at http://repo.severalnines.com/s9s-tools/install-s9s-tools.sh. On the ClusterControl host (or any client host):
$ wget http://repo.severalnines.com/s9s-tools/install-s9s-tools.sh
$ chmod 755 install-s9s-tools.sh
$ sudo ./install-s9s-tools.sh
If you would like to install manually, see Package manager or Compile from source.
Package manager
The package list is available on the s9s-tools repository page.
The repository definition file for each distribution can be downloaded directly from:
- RHEL 7: http://repo.severalnines.com/s9s-tools/RHEL_7/s9s-tools.repo
- RHEL 8: http://repo.severalnines.com/s9s-tools/RHEL_8/s9s-tools.repo
- RHEL 9: http://repo.severalnines.com/s9s-tools/RHEL_9/s9s-tools.repo
Installation from the repository is straightforward: download the repository definition file for your distribution, then install the s9s-tools package.
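For example, on an RHEL 9 host (a minimal sketch; pick the repository file that matches your distribution from the list above, and note that the package is assumed to be published as s9s-tools):
$ sudo wget http://repo.severalnines.com/s9s-tools/RHEL_9/s9s-tools.repo -P /etc/yum.repos.d/
$ sudo yum install -y s9s-tools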
Compile from source
It is possible to build the ClusterControl CLI on Linux and macOS. Building from source requires some additional packages and tools to be installed; a combined command sketch is shown after this list:
- Get the source code from GitHub.
- Navigate to the source code directory.
- Install development packages such as a C/C++ compiler, autotools and openssl-devel.
- Compile the source code.
- ClusterControl CLI should now be installed. Verify it by running s9s with the --version flag.
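A minimal sketch of the steps above, assuming the upstream source lives at https://github.com/severalnines/s9s-tools and uses a standard autotools build (the development package names shown are for RHEL-based systems):
$ git clone https://github.com/severalnines/s9s-tools.git
$ cd s9s-tools
$ sudo yum install -y gcc gcc-c++ make automake autoconf libtool openssl-devel
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
$ s9s --version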
Configuring CLI user
The first thing that must be done is to create a user that is allowed to connect to and use the controller. Communication between the s9s command-line client and the controller (the cmon process) is encrypted using TLS on port 9501. Public and private RSA key pairs associated with a username are used to encrypt the communication. The s9s command-line client is responsible for setting up the user and the required private and public keys.
Local access
During a ClusterControl installation, a user called "admin" is automatically created and configured inside /etc/s9s.conf (which is why the "admin" username is reserved) for ClusterControl CLI usage. To verify this, invoke the --whoami flag of the user command:
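$ s9s user --whoami
admin
On a default installation this is expected to return the built-in "admin" user.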
If you do not want to use the default admin user for ClusterControl CLI, you can create another user as shown below:
s9s user --create \
--generate-key \
--controller="https://localhost:9501" \
--group=admins <username>
Example output
Create a user called "dba" to access the CLI on the same host as the ClusterControl server:
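s9s user --create \
    --generate-key \
    --controller="https://localhost:9501" \
    --group=admins dba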
Attention
Group "admins" has the right to perform all operations on the managed clusters. For a fine-grained access level, please create a different group and assign it with proper ACLs. See s9s-tree.
You will notice the following files generated under the ~/.s9s/ directory:
-rw------- 1 root root 1675 Jan 31 13:29 dba.key
-rw------- 1 root root 426 Jan 31 13:29 dba.pub
-rw-r--r--. 1 root root 286 Jan 31 13:29 s9s.conf
Commonly, after running the --whoami command, you will become the new user "dba" because ~/.s9s/s9s.conf (if it exists) takes precedence over the global /etc/s9s.conf:
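$ s9s user --whoami
dba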
To force ClusterControl CLI to use the "admin" user (or to use another user's configuration file located elsewhere), you can prefix the command with the S9S_USER_CONFIG environment variable:
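S9S_USER_CONFIG=/etc/s9s.conf s9s user --whoami
This forces the command to read the global configuration (the "admin" user) regardless of ~/.s9s/s9s.conf.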
Remote access
The steps to set up the ClusterControl CLI for remote access are similar to those for local access, except:
- The s9s command-line client must exist on the remote server.
- The ClusterControl controller (cmon) must accept TLS connections from the remote server.
- The remote server can connect to the controller with key-based authentication (no password). This is required only during user creation, for the private/public key setup used for encryption purposes.
The following shows how to prepare ClusterControl to accept remote user access, for a remote user called remote_dba. A combined command sketch follows the steps below.
- Set up the bind address for the ClusterControl controller (cmon) process as follows:
  Add the line:
  Here we assume the public IP address of the ClusterControl host (controller) is 192.168.99.12.
  Attention
  Ideally, you should lock down this IP address with firewall rules, only allowing access from the remote servers you wish to access the controller from.
- Restart the controller and check the log:
- Verify the ClusterControl Controller is listening on the configured IP address on port 9501:
- On the remote server/computer, enable key-based authentication and create a user called remote_dba:
- As the current user (root or a sudoer, for example) on the remote server, set up passwordless SSH to the ClusterControl host. Generate an SSH key if you don't have one:
- Copy the SSH public key to the ClusterControl controller host, for example, 192.168.99.12:
  ssh-copy-id root@192.168.99.12
- Create the ClusterControl CLI client user:
- Ensure the config file located at ~/.s9s/s9s.conf looks like this (note that the IP address of the controller may be different):
- Finally, test the connection:
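A combined sketch of the commands behind the steps above. The bind-address variable name (RPC_BIND_ADDRESSES), the /etc/default/cmon file, the /var/log/cmon.log location, and the root SSH user are assumptions based on a typical ClusterControl installation; adjust them to your environment.
On the ClusterControl host (192.168.99.12):
$ sudo vi /etc/default/cmon          # add: RPC_BIND_ADDRESSES="127.0.0.1,192.168.99.12"
$ sudo systemctl restart cmon
$ sudo tail -f /var/log/cmon.log     # check that the controller starts cleanly
$ sudo ss -tlnp | grep 9501          # verify cmon listens on 192.168.99.12:9501
On the remote server:
$ sudo useradd -m remote_dba         # operating-system user for the CLI
$ ssh-keygen -t rsa                  # generate an SSH key if you do not have one
$ ssh-copy-id root@192.168.99.12     # passwordless SSH to the controller host
$ s9s user --create --generate-key \
      --controller="https://192.168.99.12:9501" \
      --group=admins remote_dba
$ s9s user --whoami                  # test the connection; should return remote_dba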
The configuration for ClusterControl CLI remote user is complete.
Using the CLI
In most sections of the user guide, there are many examples of how to perform actions using the ClusterControl CLI (alongside the ClusterControl GUI and the Terraform Provider for ClusterControl). Find the ClusterControl CLI tab to understand how the commands are constructed and manipulated by looking at the examples.
Example
The following is an example of tabbed content in the user guide section which highlights the CLI examples:
- ClusterControl GUI tab: .. Some step-by-step guidance ..
- ClusterControl CLI tab: .. Some example commands using ClusterControl CLI .. (1)
    1. Look at this tab for ClusterControl CLI examples.
- Terraform tab: .. Some example definitions using Terraform HCL ..
For a full list of all commands and options, see the ClusterControl CLI Reference Manuals.
Getting started
To check the CLI version, use the -V or --version flag:
Example output
$ s9s --version
___ _ _
___ / _ \ ___ | |_ ___ ___ | |___
/ __| (_) / __|_____| __/ _ \ / _ \| / __|
\__ \\__, \__ \_____| || (_) | (_) | \__ \
|___/ /_/|___/ \__\___/ \___/|_|___/
s9s version 1.9.2024112020 (Sweden)
BUILD (1.9.2024112020-release) 2024-11-21 18:34:39+00:00
Copyright (C) 2016-2022 Severalnines AB
To list out all available commands and options, use the --help flag:
Example output
$ s9s --help
Usage:
s9s COMMAND [OPTION...]
Where COMMAND is:
account - to manage accounts on clusters.
alarm - to manage alarms.
backup - to view, create and restore database backups.
cluster - to list and manipulate clusters.
controller - to manage Cmon controllers.
job - to view jobs.
maintenance - to view and manipulate maintenance periods.
metatype - to print metatype information.
node - to handle nodes.
process - to view processes running on nodes.
replication - to monitor and control data replication.
dbschema - to view database schemas.
report - to manage reports.
script - to manage and execute scripts.
server - to manage hardware resources.
sheet - to manage spreadsheets.
user - to manage users.
Generic options:
-c, --controller=URL The URL where the controller is found.
--config-file=PATH Specify the configuration file for the program.
--help Show help message and exit.
-P, --controller-port INT The port of the controller.
-p, --password=PASSWORD The password for the Cmon user.
--private-key-file=FILE The name of the file for authentication.
--rpc-tls Use TLS encryption to controller.
-u, --cmon-user=USERNAME The username on the Cmon system.
-v, --verbose Print more messages than normally.
-V, --version Print version information and exit.
Formatting:
--batch No colors, no human readable, pure data.
--color=always|auto|never Sets if colors should be used in the output.
--date-format=FORMAT The format of the dates printed.
-l, --long Print the detailed list.
--log-file=PATH The path where the s9s client puts its logs.
--no-header Do not print headers.
--only-ascii Do not use UTF8 characters.
--print-json Print the sent/received JSon messages.
--print-request Print the sent JSon request message.
Job related options:
--job-tags=LIST Set job tags when creating a new job.
--log Wait and monitor job messages.
--recurrence=CRONTABSTRING Timing information for recurring jobs.
--schedule=DATE&TIME Run the job at the specified time.
--timeout=SECONDS Timeout value for the entire job.
--wait Wait until the job ends.
You can also view the user manual by using the man command:
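$ man s9s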
For every command, help information is available with the --help flag:
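$ s9s node --help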
Alternatively, run the user manual command and specify s9s-<command> as the chapter name, for example:
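$ man s9s-node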
After a command, specify the arguments (with their respective values) or flags. In the following example, we list all nodes belonging to cluster ID 36 (Percona XtraDB Cluster 8.0) in a detailed format (--long):
- Get the cluster ID from the ClusterControl GUI or from the ClusterControl CLI using s9s cluster --list --long.
Tip
Use the --cluster-name argument to identify a cluster by its name string, instead of --cluster-id with an integer. For example:
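s9s node --cluster-name="PROD - PXC 8.0" --list --long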
Example output
$ s9s node --cluster-id=36 --list --long
STAT VERSION CID CLUSTER HOST PORT COMMENT
coC- 2.3.0.11678 36 PROD - PXC 8.0 192.168.99.2 9500 Up and running.
Po-- 2.29.2 36 PROD - PXC 8.0 192.168.99.2 9090 Process 'prometheus' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.3 3306 Up and running (read-write).
Ao-- - 36 PROD - PXC 8.0 192.168.99.3 4433 Process 'cmnd' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.4 3306 Up and running (read-write).
Ao-- - 36 PROD - PXC 8.0 192.168.99.4 4433 Process 'cmnd' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.5 3306 Up and running (read-write).
Ao-- 1.0.0 36 PROD - PXC 8.0 192.168.99.5 4433 Process 'cmnd' is running.
yo-- 2.6.5 36 PROD - PXC 8.0 192.168.99.6 6032 Process 'proxysql' is running.
koM- 2.2 36 PROD - PXC 8.0 192.168.99.6 112 Process 'keepalived' is running.
yl-- 2.6.5 36 PROD - PXC 8.0 192.168.99.7 6032 Processes with name 'proxysql' has ended.
k?-- 2.2 36 PROD - PXC 8.0 192.168.99.7 112 Process 'keepalived' is running.
By default, ClusterControl CLI will authenticate as the user configured inside ~/.s9s/s9s.conf. If it does not exist, it will use the global configuration located at /etc/s9s.conf. To bypass these configuration files, specify the user's configuration as arguments:
# using private key to authenticate
s9s node --cluster-id=36 --list --long \
--cmon-user=dba \
--private-key-file=~/.s9s/dba.key \
--controller=https://localhost:9501 \
--rpc-tls
# using password to authenticate
s9s node --cluster-id=36 --list --long \
--cmon-user=dba \
--password='mysUp3rRsecr3tP455' \
--controller=https://localhost:9501 \
--rpc-tls
Alternatively, specify a custom configuration file using the --config-file option or the S9S_USER_CONFIG environment variable:
# using --config-file
s9s node --cluster-id=36 --list --long --config-file=/home/dba/my-custom-s9s.conf
# using prefixed environment variable
S9S_USER_CONFIG=/home/dba/my-custom-s9s.conf s9s node --cluster-id=36 --list --long
# using exported environment variable
export S9S_USER_CONFIG=/home/dba/my-custom-s9s.conf
s9s node --cluster-id=36 --list --long
Initial check
There are multiple ways to check whether ClusterControl CLI is able to connect to the controller service correctly, as shown below (a command sketch follows this list):
- Ping cluster ID 0 (a special cluster ID).
- Get the current active user.
- Get the command's exit code.
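A sketch of the three checks above. The --ping option of the cluster command is assumed here as the way to reach the controller; the exit code is read from the shell's $? variable:
$ s9s cluster --ping --cluster-id=0
$ s9s user --whoami
$ s9s user --whoami; echo $?    # 0 means the command (and the connection) succeeded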
Output and manipulation
For more verbosity on the request being sent to the controller, append the -v flag to any command:
Example output
$ s9s node --cluster-id=36 --list --long -v
Command line options processed.
Preparing to send request.
URI is '/v2/auth'
+++ Connecting to localhost:9501...
Connected.
Initiate TLS...
TLS handshake finished (version: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384).
Sending:
POST /v2/auth HTTP/1.0
Host: localhost:9501
User-Agent: s9s-tools/1.0
Connection: close
Accept: application/json
Transfer-Encoding: identity
Content-Type: application/json
Content-Length: 201
{
"operation": "authenticateWithPassword",
"password": "xxxxxxxxxxxxxxxxxxxx",
"request_created": "2025-01-31T13:06:21.100Z",
"request_id": 1,
"user_name": "admin"
}
Controller version: 2.3.0.11678
Preparing to send request.
URI is '/v2/clusters/'
+++ Connecting to localhost:9501...
Connected.
Initiate TLS...
TLS handshake finished (version: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384).
Sending:
POST /v2/clusters/ HTTP/1.0
Host: localhost:9501
User-Agent: s9s-tools/1.0
Connection: close
Accept: application/json
Transfer-Encoding: identity
Cookie: cmon-sid=8fe6911b1dc84ceca65c90b00fe005b8
Content-Type: application/json
Content-Length: 155
{
"cluster_id": 36,
"operation": "getClusterInfo",
"request_created": "2025-01-31T13:06:21.118Z",
"request_id": 2,
"with_hosts": true
}
STAT VERSION CID CLUSTER HOST PORT COMMENT
coC- 2.3.0.11678 36 PROD - PXC 8.0 192.168.99.2 9500 Up and running.
Po-- 2.29.2 36 PROD - PXC 8.0 192.168.99.2 9090 Process 'prometheus' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.3 3306 Up and running (read-write).
Ao-- - 36 PROD - PXC 8.0 192.168.99.3 4433 Process 'cmnd' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.4 3306 Up and running (read-write).
Ao-- - 36 PROD - PXC 8.0 192.168.99.4 4433 Process 'cmnd' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.5 3306 Up and running (read-write).
Ao-- 1.0.0 36 PROD - PXC 8.0 192.168.99.5 4433 Process 'cmnd' is running.
yo-- 2.6.5 36 PROD - PXC 8.0 192.168.99.6 6032 Process 'proxysql' is running.
koM- 2.2 36 PROD - PXC 8.0 192.168.99.6 112 Process 'keepalived' is running.
yl-- 2.6.5 36 PROD - PXC 8.0 192.168.99.7 6032 Processes with name 'proxysql' has ended.
k?-- 2.2 36 PROD - PXC 8.0 192.168.99.7 112 Process 'keepalived' is running.
Total: 12
Exiting with exitcode 0.
Use the --print-json flag to get an extended output in JSON format. For example:
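$ s9s node --cluster-id=36 --list --long --print-json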
If a command triggers a job, you can choose to wait for the job to finish and a progress bar will be displayed (otherwise, a job will be registered and running in background):
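For example, restarting a database node registers a job; with --wait, the client blocks and shows a progress bar until the job completes (this sketch reuses the node and cluster from the examples above):
$ s9s node --cluster-id=36 --nodes=192.168.99.4 --restart --wait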
Example output
To see and follow the triggered job messages, specify --log (it will display output similar to the ClusterControl GUI's job messages):
Example output
$ s9s node --cluster-id=36 --nodes=192.168.99.4 --restart --log
CMON version 2.3.0.11678.
Using SSH credentials from cluster.
Cluster ID is 36.
The creds name is 'ssh_cred_cluster_36_6245'.
The username is 'root'.
The keyfile is '/root/.ssh/id_rsa'.
Preparing to restart host.
192.168.99.4: Checking ssh/sudo with credentials ssh_cred_cluster_36_6245.
Saving cluster_autorecovery: true, node_autorecovery: true settings.
Setting cluster_autorecovery: false, node_autorecovery: false settings.
192.168.99.4:3306: Stopping mysqld (timeout=600, force stop after timeout=false).
192.168.99.4: Stopping MySQL service.
192.168.99.4: mysql.service - Percona XtraDB Cluster
Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; preset: disabled)
Active: active (running) since Tue 2025-02-11 10:05:31 UTC; 55s ago
Process: 92799 ExecStartPre=/usr/bin/mysql-systemd start-pre (code=exited, status=0/SUCCESS)
Process: 92826 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 92827 ExecStartPre=/bin/sh -c VAR=`bash /usr/bin/mysql-systemd galera-recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=0/SUCCESS)
Process: 93681 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 93683 ExecStartPost=/usr/bin/mysql-systemd start-post $MAINPID (code=exited, status=0/SUCCESS)
Main PID: 92862 (mysqld)
Status: "Server is operational"
Tasks: 55 (limit: 10866)
Memory: 609.5M
CPU: 4.763s
CGroup: /system.slice/mysql.service
92862 /usr/sbin/mysqld --wsrep_start_position=8fad3f53-7f17-11ef-a5d5-5f0a8f95055b:4637068
Feb 11 10:05:23 docs-pxc-prod-02 systemd[1]: Starting Percona XtraDB Cluster...
Feb 11 10:05:25 docs-pxc-prod-02 systemd[1]: mysql.service: Got notification message from PID 93296, but reception only permitted for main PID 92862
Feb 11 10:05:31 docs-pxc-prod-02 mysql-systemd[93683]: SUCCESS!
Feb 11 10:05:31 docs-pxc-prod-02 systemd[1]: Started Percona XtraDB Cluster.
192.168.99.4: Stopping the service mysqld.
...
In an output with a list of entries, the left-most column (STAT) displays a letter-based representation of the object's state. Here is an example:
$ s9s node --cluster-id=36 --list --long
STAT VERSION CID CLUSTER HOST PORT COMMENT
coC- 2.3.0.11678 36 PROD - PXC 8.0 192.168.99.2 9500 Up and running.
Po-- 2.29.2 36 PROD - PXC 8.0 192.168.99.2 9090 Process 'prometheus' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.3 3306 Up and running (read-write).
Ao-- - 36 PROD - PXC 8.0 192.168.99.3 4433 Process 'cmnd' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.4 3306 Up and running (read-write).
Ao-- - 36 PROD - PXC 8.0 192.168.99.4 4433 Process 'cmnd' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.5 3306 Up and running (read-write).
Ao-- 1.0.0 36 PROD - PXC 8.0 192.168.99.5 4433 Process 'cmnd' is running.
yo-- 2.6.5 36 PROD - PXC 8.0 192.168.99.6 6032 Process 'proxysql' is running.
koM- 2.2 36 PROD - PXC 8.0 192.168.99.6 112 Process 'keepalived' is running.
yl-- 2.6.5 36 PROD - PXC 8.0 192.168.99.7 6032 Processes with name 'proxysql' has ended.
k?-- 2.2 36 PROD - PXC 8.0 192.168.99.7 112 Process 'keepalived' is running.
Total: 12
This information can be demystified by looking at the user manual page of the node command, under the STAT section:
$ man s9s-node | grep -A21 -- 'STAT '
STAT Some status information represented as individual letter. This field contains the following char‐
acters:
nodetype
This is the type of the node. It can be c for controller, g for Galera node, x for MaxScale
node, k for Keepalived node, p for PostgreSQL, m for Mongo, e for MemCached, y for ProxySql,
h for HaProxy, b for PgBouncer, B for PgBackRest, t for PBMAgent, a for Garbd, r for group
replication host, A for cmon agent, P for Prometheus, s for generic MySQL nodes, S for Redis
sentinel, R for Redis, E for Elasticsearch, and ? for unknown nodes.
hoststatus
The status of the node. It can be o for on-line, l for off-line, f for failed nodes, r for
nodes performing recovery, - for nodest that are shut down and ? for nodes in unknown state.
role This field shows the role of the node in the cluster. This can be M for master, S for Slave,
U for multi (master and slave), C for controller, V for backup verification node, A for ar‐
biter, R for backup repository host D for Elasticsearch data host c for Elasticsearch coordi‐
nator_only host and - for everything else.
maintenance
This field shows if the node is in maintenance mode. The character is M for nodes in mainte‐
nance mode and - for nodes that are not in maintenance mode.
By default, ClusterControl CLI displays a human-friendly output with colouring, headers, and some statistics. You can use the --batch flag to return plain data, which is useful for manipulation by other commands or automation scripts:
Example output
With --batch, the data is printed without syntax highlighting, headers, or totals; only a pure table to be processed using filters:
$ s9s node --cluster-name="PROD - PXC 8.0" --list --long --batch
coC- 2.3.0.11678 36 PROD - PXC 8.0 192.168.99.2 9500 Up and running.
Po-- 2.29.2 36 PROD - PXC 8.0 192.168.99.2 9090 Process 'prometheus' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.3 3306 Up and running (read-write).
Ao-- - 36 PROD - PXC 8.0 192.168.99.3 4433 Process 'cmnd' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.4 3306 Up and running (read-write).
Ao-- - 36 PROD - PXC 8.0 192.168.99.4 4433 Process 'cmnd' is running.
goM- 8.0.37 36 PROD - PXC 8.0 192.168.99.5 3306 Up and running (read-write).
Ao-- 1.0.0 36 PROD - PXC 8.0 192.168.99.5 4433 Process 'cmnd' is running.
yo-- 2.6.5 36 PROD - PXC 8.0 192.168.99.6 6032 Process 'proxysql' is running.
koM- 2.2 36 PROD - PXC 8.0 192.168.99.6 112 Process 'keepalived' is running.
yl-- 2.6.5 36 PROD - PXC 8.0 192.168.99.7 6032 Processes with name 'proxysql' has ended.
k?-- 2.2 36 PROD - PXC 8.0 192.168.99.7 112 Process 'keepalived' is running.
To pipe the output to another command, it is recommended to do it in batch mode:
$ s9s node --cluster-name="PROD - PXC 8.0" --list --long --batch | grep ^coC | awk {'print $9'}
9500
With --print-json, we can redirect the output to a JSON processor like jq and perform JSON object filtering:
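A minimal sketch; pretty-printing the reply works with any filter, while deeper filtering depends on the structure of the JSON reply returned by the controller:
$ s9s node --cluster-id=36 --list --long --print-json | jq '.'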
Combined with diff, we can compare the current database configurations (from the configuration file) side-by-side between two PostgreSQL database nodes, 192.168.99.7 and 192.168.99.8:
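A sketch of such a comparison, assuming the node command's --list-config option prints the configuration values for a given node:
$ diff --side-by-side \
    <(s9s node --list-config --nodes=192.168.99.7) \
    <(s9s node --list-config --nodes=192.168.99.8)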