
ClusterControl CLI

Also known as s9s-tools, this optional package was introduced in ClusterControl version 1.4.1. It contains a binary called s9s, a command-line tool to interact with, control, and manage database clusters using the ClusterControl Database Platform. Starting from version 1.4.1, the installer script automatically installs this package on the ClusterControl node. You can also install it on another computer or workstation to manage the database cluster remotely. Communication between this client and the CMON controller is encrypted and secured through TLS. This command-line project is open source and publicly available on GitHub.

The command-line tool is invoked by executing a binary called s9s. The commands are essentially JSON messages sent to the ClusterControl Controller (CMON) RPC interface. Communication between s9s (the command-line tool) and the cmon process (ClusterControl Controller) is encrypted using TLS and requires port 9501 to be open on the controller and reachable from the client host.
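When s9s is installed on a separate workstation, the controller address and the Cmon user are typically read from the client-side configuration file. The snippet below is only a minimal sketch of such a file; the controller URL and user name are placeholders that you would replace with your own values (see s9s.conf(5) for the authoritative list of settings):

# ~/.s9s/s9s.conf -- minimal client configuration (values are placeholders)
[global]
cmon_user  = myadmin
controller = https://192.168.0.10:9501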

Following is the list of supported commands and options for ClusterControl CLI. You can also get the same information by accessing the manual page or using the --help flag in the terminal:

$ man s9s
$ s9s --help

s9s-account

Manage user accounts on clusters. The term "account" in this section refers to a database user account on a managed database server or cluster.

Usage

s9s account {command} {options}

Commands

Name, shorthand Description
--create Creates a new account on the cluster. Note that the account is an account of the cluster and not a user of the Cmon system.
--delete Removes an account.
--list, -L Lists the accounts on the cluster.
--grant Grants privileges for an account on one or more databases.

Options

Name, shorthand Description
--account=USERNAME[:PASSWORD][@HOSTNAME] The account to be used or created on the cluster. The command-line option argument may contain a username, a password for the user, and a hostname identifying the host from where the user may log in. The s9s command-line tool handles the command-line option argument as a URL-encoded string, so if the password contains an @ character, for example, it should be encoded as %40. URL-encoded parts are supported anywhere in the string; usernames, passwords, and even hostnames may contain special characters.
-l, --long Prints the list in a longer, more detailed format.
--private Creates a secure, more restricted account on the cluster. The actual interpretation of this flag depends on the controller; the current version restricts access to the ProxySQL servers. Accounts created with the --private option will not be imported into ProxySQL, so they will not have access through the ProxySQL server immediately after they are created on the cluster.
--privileges=EXPRESSION Privileges to be granted to a user account on the server. See Privilege Expression.
--with-database Creates a database for the new account while creating a new user account on the cluster. The name of the database will be the same as the name of the account and all access rights will be granted for the account to use the database.

Privilege Expression

The privileges are specified using a simple language that is interpreted by the CMON Controller. The language is specified as follows:

expression: specification[;...]
specification: [object[,...]:]privilege[,...]
object: {
  *
  | *.*
  | database_name.*
  | database_name.table_name
  | database_name
}

Note that an object name on its own is a database name (and not a table name), and multiple objects can be enumerated using the comma (,) as a separator. It is also important that multiple specifications can be enumerated using the semicolon (;) as a separator.

The expression MyDb:INSERT,UPDATE;Other:SELECT, for example, defines INSERT and UPDATE privileges on the MyDb database and the SELECT privilege on the Other database. The expression INSERT,UPDATE, on the other hand, would specify INSERT and UPDATE privileges on all databases and all tables.
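As an additional illustration of this syntax, the sketch below grants such an expression to an existing account using the --grant command described above; the cluster ID, account name, and database names are placeholders:

$ s9s account \
        --grant \
        --cluster-id=1 \
        --account="myuser" \
        --privileges="MyDb:INSERT,UPDATE;Other:SELECT"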

Examples

  • Create a new MySQL user account myuser with password secr3tP4ss, and allow it to have ALL PRIVILEGES on the database shop_db and SELECT on the table account_db.payments:

    $ s9s account \
      --create \
      --cluster-id=1 \
      --account="myuser:[email protected]" \
      --privileges="shop_db.*:ALL;account_db.payments:SELECT"
    
  • Create a new PostgreSQL user account called mydbuser, and allow hosts in the network subnet 192.168.0.0/24 to access the database mydbshop:

    $ s9s account --create \
      --cluster-id=50 \
      --account='mydbuser:k#[email protected]/24' \
      --privileges="mydbshop.*:ALL"
    
  • Delete a database user called joe:

    $ s9s account \
      --delete \
      --cluster-id=1 \
      --account="joe"
    
  • List the accounts on the cluster:

    $ s9s account \
      --list \
      --long \
      --cluster-id=1
    

s9s-alarm

Manage alarms.

Usage

s9s alarm {command} {options}

Commands

Name, shorthand Description
--delete Sets the alarm to be ignored. This will not in fact delete the alarm, but it will make the alarm disappear from the active alarms list, hence the name of the option.
--list Lists the active alarms.
--stat Prints a simple list of the number of alarms.

Options

Name, shorthand Description
--cluster-id=ID, -i The ID of the cluster to manipulate.
--cluster-name=NAME, -n Sets the cluster name. If the operation creates a new cluster this will be the name of the new cluster.
--alarm-id=ID The ID of the alarm to manipulate.

Examples

  • List out all alarms generated by ClusterControl for a database cluster named "PostgreSQL Cluster":

    $ s9s alarm --cluster-name="PostgreSQL Cluster" --list
    
  • Delete an alarm:

    $ s9s alarm --delete --alarm-id=1015
    
  • Print a simple list of the number of alarms:

    $ s9s alarm --stat
    

    Example output

    1,0,0
    

    The list will contain three numbers for every line. The first is the cluster ID, the second is the number of critical alarms, and the third is the number of warning-level alarms for the given cluster.

s9s-backup

View and create database backups. The following backup methods are supported:

  • mysqldump
  • xtrabackup (full)
  • xtrabackup (incremental)
  • mariabackup (full)
  • mariabackup (incremental)
  • mongodump
  • pg_dumpall

The s9s client also needs to know:

  • The cluster ID (or cluster name) of the cluster to back up.
  • The node to back up.
  • The databases that should be included (by default, all databases).

By default, the backups will be stored on the controller node. If you wish to store the backup on the data node, you can set the flag --on-node.

Note

If you are using Percona Xtrabackup, an incremental backup requires that there is already a full backup made of the same databases (all or individually specified). Otherwise, the incremental backup will be upgraded to a full backup.
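For example, an incremental backup using Percona Xtrabackup could be requested with a command like the sketch below; the cluster ID, node address, and backup directory are placeholders, and a full xtrabackup of the same databases must already exist, otherwise the job is upgraded to a full backup as noted above:

$ s9s backup --create \
        --backup-method=xtrabackupincr \
        --cluster-id=2 \
        --nodes=10.10.10.20:3306 \
        --on-controller \
        --backup-directory=/storage/backups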

Usage

s9s backup {command} {options}

Commands

Name, shorthand Description
--create Creates a new backup. This command-line option will initiate a new job that will create a new backup.
--create-schedule Creates a backup schedule, i.e. a backup that is repeated. Please note that there are two ways to create a repeated backup: a job that creates a backup can itself be scheduled and repeated, or with this option a backup schedule can be created to repeat the creation of a backup.
--delete Deletes an existing backup.
--delete-old Initiates a job that checks for expired backups and removes them from the system.
--delete-all Deletes all backups (filtering by some property such as the snapshot repository name). This command-line option will initiate a job that checks for all backups and removes them from the system. The db_cluster_id property is used because it could be that no cluster exists anymore but there is still backup data in the database. With the forced option, the backup record will be deleted from the database even when the backup deletion itself failed.
--delete-after-upload Creates a job to generate a backup that is stored only in the cloud: after the backup is created on the storage location filesystem, the local copy is deleted once it has been uploaded to the cloud. This option has to be set to indicate the deletion of the local backup copy after uploading it to the cloud.
--delete-schedules Deletes the backup schedule specified by the job ID.
--list Lists the backups. When listing the backups with the --long option, more detailed columns are listed. See Backup List.
--list-databases Lists the backups in database view format. This format is designed to show the archived databases in the backups.
--list-files Lists the backups in file view format. This format is designed to show the archive files of the backups.
--list-schedules Lists the backup schedules.
--restore Restores an existing backup.
--restore-cluster-info Restores the information the controller has about a cluster from a previously created archive file.
--restore-controller Restores the entire controller from a previously created tarball (created by using the --save-controller option).
--pitr-stop-time Timestamp specification for doing point-in-time backup recovery.
--psql-immediate By default, when PostgreSQL is started in recovery mode after a backup restoration, it also applies the complete available WAL stream to provide the most up-to-date state of the database. With this option, PostgreSQL can be asked not to apply the WAL, or only as much of it as is required for database consistency, and to finish recovery as soon as possible.
--save-cluster-info Saves information about one cluster.
--save-controller Saves the entire controller into a file.
--verify Creates a job to verify a backup. When this main option is used, the --backup-id option has to be used to identify a backup, and the --test-server option is also necessary to provide a server where the backup will be tested.
--cloud-only Creates a job to generate a backup, streaming it directly to the cloud without intermediate files. The --cloud-only option has to be set to indicate direct streaming of the backup to the cloud container.
--create-snapshot-repository Creates a job to create a snapshot repository on an Elasticsearch cluster. When this main option is used, the --cluster-id option has to be used to identify the cluster, --snapshot-repository-type defines the repository type (for example, "s3"), --snapshot-repository specifies the repository name, --credential-id specifies the cloud credentials to use, and --s3-bucket and --s3-region are also necessary to provide the S3 bucket and region to use for this repository.
--list-snapshot-repository Creates a job to list the snapshot repositories on an Elasticsearch cluster. When this main option is used, the --cluster-id option has to be used to identify the cluster.
--delete-snapshot-repository Creates a job to delete a snapshot repository on an Elasticsearch cluster. When this main option is used, the --cluster-id option has to be used to identify the cluster, and --snapshot-repository specifies the name of the repository to be deleted.

Options

Name, shorthand Description
--backup-directory=DIR The directory where the backup is placed.
--backup-format[=FORMATSTRING] The string that controls the format of the printed information about the backups. See Backup Format.
--backup-id=ID The ID of the backup.
--backup-method=METHOD Controls what backup software is going to be used to create the backup. The controller currently supports the following methods: ndb, mysqldump, xtrabackupfull, xtrabackupincr, mariabackupfull, mariabackupincr, mongodump, pgdump, pg_basebackup, mysqlpump.
--backup-password=PASSWORD The password for the SQL account that will create the backup. This command-line option is not mandatory.
--backup-retention=DAYS Controls a custom retention period for the backup; otherwise, the default global setting will be used. A positive value controls how long (in days) the taken backups will be preserved. The value -1 has a special meaning: the backup will be kept forever. The value 0 is the default and means the global setting (configurable on the UI) is preferred.
--backup-user=USERNAME The username for the SQL account that will create the backup.
--cloud-retention=DAYS Retention used when the backup is stored in the cloud.
--cluster-id=ID The ID of the cluster.
--compression-level Compression level (used threads) to apply during the backup compression process. The value must be between 1 and 9.
--databases=LIST A comma-separated list of database names. This argument controls which databases are going to be archived into the backup file. By default, all the databases are going to be archived.
--encrypt-backup When this option is specified ClusterControl will attempt to encrypt the backup files using AES-256 encryption (the key will be auto-generated if it does not exist yet and stored in a cluster configuration file).
--full-path Print the full path of the files.
--memory=MEGABYTES Controls how much memory the archiver process should use while restoring an archive. Currently, only xtrabackup supports this option.
--no-compression Do not compress the archive file.
--nodes=NODELIST The list of nodes involved in the backup. See Node List.
--on-node Do not copy the created archive file to the controller; store it on the node where it was created.
--on-controller Stream and store the created backup files on the controller.
--parallelism=N Controls how many threads are used while creating a backup. Please note that not all the backup methods support multi-threaded operations.
--pitr-compatible Creates a PITR-compatible backup.
--pitr-stop-time Timestamp specification for doing point-in-time backup recovery.
--recurrence=STRING Schedule time and frequency in cron format.
--safety-copies=N Controls how many safety backups should be kept while deleting old backups. This command-line option can be used together with the --delete-old option.
--subdirectory=MARKUPSTRING Sets the name of the subdirectory that holds the newly created backup files. The command-line option argument is considered to be a subpath that may contain field specifiers using the usual %X format. See Backup Subdirectory Variables.
--temp-dir-path=DIR By default, s9s backup creates temporary backup files under the /var/tmp/cmon-% path. Specify this option with your desired path if you want to target another location.
--keep-temp-dir Specify this option if you want to retain the archive files in the temporary directory.
--test-server=HOSTNAME Use the given server to verify the backup. If this option is provided while creating a new backup, a new job will be created after the backup is created to verify the backup. During the verification, the SQL software will be installed on the test server and the backup will be restored on this server. The verification job will be successful if the backup is successfully restored.
--title=STRING A short human-readable string that helps the user to identify the backup later.
--to-individual-files Archive every database into individual files. Currently, only the mysqldump backup method supports this option.
--use-pigz Use the pigz program to compress the archive.

Backup Subdirectory Variables

Variable Description
B The date and time when the backup creation began.
H The name of the backup host, the host that created the backup.
i The numerical ID of the cluster.
I The numerical ID of the backup.
J The numerical ID of the job that created the backup.
M The backup method (e.g. “mysqldump”).
O The name of the user who initiated the backup job.
S The name of the storage host, the host that stores the backup files.
% The percent sign itself. Use two percent signs (%%), which the standard printf() function interprets as a single percent sign.
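As a sketch of how these specifiers can be combined, the command below would place each backup in a subdirectory named after the cluster ID and backup ID; the path pattern and other values are placeholders:

$ s9s backup --create \
        --backup-method=mysqldump \
        --cluster-id=2 \
        --nodes=10.10.10.20:3306 \
        --backup-directory=/storage/backups \
        --subdirectory="cluster_%i/backup_%I"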

Backup List

Column Description
ID The numerical ID of the backup.
PI The numerical ID of the parent backup if there is a parent backup for the given entry.
CID The numerical ID of the cluster to which the backup belongs.
V The verification status. Here V means the backup is verified and - means the backup is not verified.
I The flag showing if the backup is incremental or not. Here F means the backup is a full backup, I means the backup is incremental, - means the backup contains no incremental or full backup files (because, for example, the backup failed) and B means the backup contains both full and incremental backup files (which is impossible).
STATE The state of the backup. Here “COMPLETED” means the backup is completed, “FAILED” means the backup has failed, and “RUNNING” means the backup is being created.
OWNER The name of the Cmon user that owns the backup.
HOSTNAME The name of the host where the backup was created.
CREATED The date and time when the backup was created.
SIZE The total size of the created backup files.
TITLE The name or title of the backup. This is a human-readable string that helps identify the backup.

Backup Format

When the option --backup-format is used the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields and flag characters as they are specified in the standard printf() C library functions. The % specifiers are ended by field name letters to refer to various properties of the backups.

The %+12I format string, for example, has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the backup are encoded by letters. In %16H, for example, the letter H encodes the hostname. The standard \ notation is also available; \n, for example, encodes a newline character.

The s9s-tools support the following fields:

Character Description
B The date and time when the backup creation began. The format used to print the dates and times can be set using the --date-format option.
C The backup file creation date and time. The format used to print the dates and times can be set using the --date-format option.
d The names of the databases in a comma-separated string list.
D The description of the backup. If the c modifier is used (e.g. %cD) the configured description is shown.
e The word "ENCRYPTED" or "UNENCRYPTED" depending on the encryption status of the backup.
E The date and time when the backup creation ended. The format used to print the dates and times can be set using the --date-format option.
F The archive file name.
H The backup host (the host that created the backup). If the c modifier is used (e.g. %cH) the configured backup host is shown.
I The numerical ID of the backup.
i The numerical ID of the cluster to which the backup belongs.
J The numerical ID of the job that created the backup.
M The backup method used. If the c modifier is used the configured backup method will be shown.
O The name of the owner of the backup.
P The full path of the archive file.
R The root directory of the backup.
S The name of the storage host, the host where the backup was stored.
s The size of the backup file measured in bytes.
t The title of the backup. A title can be added when the backup is created; it helps to identify the backup later.
v The verification status of the backup. Possible values are "Unverified", "Verified" and "Failed".
% The percent sign itself. Use two percent signs (%%), which the standard printf() function interprets as a single percent sign.
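For instance, a custom listing that prints the backup ID, the backup host, the start time, and the size in bytes could look like the following sketch; the chosen field letters are just an example:

$ s9s backup --list \
        --cluster-id=2 \
        --backup-format="%5I %H %B %s\n"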

Examples

  • Suppose we have a data node on 10.10.10.20 (port 3306) on cluster ID 2, and we want to back up all databases using mysqldump and store the backup on the ClusterControl server:

    $ s9s backup --create \
            --backup-method=mysqldump \
            --cluster-id=2 \
            --nodes=10.10.10.20:3306 \
            --on-controller \
            --backup-directory=/storage/backups
    
  • Create a mongodump backup on 10.0.0.148 for the cluster named ‘MongoDB ReplicaSet 5.0’ and store the backup on the database node:

    $ s9s backup --create \
            --backup-method=mongodump \
            --cluster-name='MongoDB ReplicaSet 5.0' \
            --nodes=10.0.0.148 \
            --backup-directory=/storage/backups
    
  • Schedule a full backup using MariaDB backup every day at 1:10 AM:

    $ s9s backup --create \
            --backup-method=mariabackupfull \
            --nodes=10.10.10.19:3306 \
            --cluster-name=MDB101 \
            --backup-dir=/home/vagrant/backups \
            --on-controller \
            --recurrence='10 1 * * *'
    
  • Schedule an incremental backup using MariaDB backup every day at 1:30 AM:

    $ s9s backup --create \
            --backup-method=mariabackupincr \
            --nodes=10.10.10.19:3306 \
            --cluster-name=MDB101 \
            --backup-dir=/home/vagrant/backups \
            --on-controller \
            --recurrence='30 1 * * *'
    
  • Create a pg_dumpall backup on the PostgreSQL master server and store the backup on the ClusterControl server:

    $ s9s backup --create \
            --backup-method=pgdump \
            --nodes=192.168.0.81:5432 \
            --cluster-id=43 \
            --backup-dir=/home/vagrant/backups  \
            --on-controller \
            --log
    
  • List all backups for cluster ID 2:

    $ s9s backup --list \
            --cluster-id=2 \
            --long \
            --human-readable
    
    Tip

    Omit the --cluster-id=2 option to see the backup records for all clusters.

  • Restore backup ID 3 on cluster ID 2:

    $ s9s backup --restore \
            --cluster-id=2 \
            --backup-id=3 \
            --wait
    
    Note

    If the backup is encrypted, it will be automatically decrypted when restoring.

  • Create a job to verify the given backup identified by the backup ID. The job will attempt to install MySQL on the test server using the same settings as for the given cluster, then restore the backup on this test server. The job returns OK only if the backup is successfully restored on the test server:

    $ s9s backup --verify \
            --log \
            --backup-id=1 \
            --test-server=192.168.0.55 \
            --cluster-id=1
    
  • Delete the local backup copy after uploading it to the cloud for cluster ID 1:

    $ s9s backup \
            --create \
            --cluster-id=1 \
            --nodes=192.168.0.209 \
            --delete-after-upload \
            --cloud-provider="aws" \
            --s3-bucket="my-aws-bucket-eu-west-1" \
            --credential-id=2 \
            --backup-method=xtrabackupfull \
            --wait
    
  • Delete old backups for cluster ID 1 that are older than 7 days, but keep at least 3 of the latest backups:

    $ s9s backup --delete-old \
            --cluster-id=1 \
            --backup-retention=7 \
            --safety-copies=3 \
            --log
    
  • Delete a backup schedule with job ID 1:

    $ s9s backup \
            --delete-schedules \
            --job-id=1
    
  • Perform a point-in-time restoration at a particular date and time for backup ID 10 on cluster ID 1:

    $ s9s backup \
            --restore \
            --cluster-id=1 \
            --backup-id=10 \
            --pitr-stop-time="2020-07-14 14:27:04" \
            --log
    
  • Perform a PostgreSQL restoration without applying the WAL for backup ID 10 on cluster ID 1:

    $ s9s backup \
            --restore \
            --cluster-id=1 \
            --backup-id=10 \
            --psql-immediate \
            --log
    
  • Create a snapshot repository on an Elasticsearch cluster with cluster ID 1:

    $ s9s backup \
          --create-snapshot-repository \
          --cluster-id=1 \
          --snapshot-repo-type=s3 \
          --snapshot-repository=mySnapshotRepository \
          --credential-id=1 \
          --s3-bucket=elastic-s3-test \
          --s3-region=eu-west-3 \
          --wait
    
  • List the snapshot repositories on an Elasticsearch cluster with cluster ID 1:

    $ s9s backup \
          --list-snapshot-repository \
          --cluster-id=1
    
  • Delete a snapshot repository on an Elasticsearch cluster with cluster ID 1:

    $ s9s backup \
          --delete-snapshot-repository \
          --cluster-id=1 \
          --snapshot-repository=mySnapshotRepository \
          --wait
    
  • Generate a backup, streaming it directly to the cloud, for cluster ID 1:

    $ s9s backup \
        --create \
        --cluster-id=1 \
        --nodes=192.168.0.209 \
        --cloud-only \
        --cloud-provider="aws" \
        --s3-bucket="my-aws-bucket-eu-west-1" \
        --credential-id=2 \
        --backup-method=xtrabackupfull \
        --wait
    

s9s-cluster

Create, manage, and manipulate clusters.

Usage

s9s cluster {command} {options}

Commands

Name, shorthand Description
--add-node Adds a new node (server) to the cluster or, to be more precise, creates a new job that will eventually add a new node to the cluster. The name (or IP address) of the node should be specified using the --nodes command-line option. See Node List.
--available-upgrades Shows the available packages to upgrade the cluster with.
--change-config Changes the configuration values for the cluster. The cluster configuration in this context is the Cmon Controller’s configuration for the given cluster.
--check-hosts Checks the hosts before installing a cluster.
--check-pkg-upgrades Checks available cluster package upgrades.
--collect-logs Creates a job that will collect the log files from the nodes of the cluster.
--create Creates a new cluster. When this command-line option is provided the program will contact the controller and register a new job that will eventually create a new cluster.
--create-account Creates a new account to be used on the cluster to access the database(s).
--create-database Creates a database on the cluster.
--create-report When this command-line option is provided a new job will be started that will create a report. After the job is executed the report will be available on the controller. If the --output-dir command-line option is provided the report will be created in the given directory on the controller host.
--delete-account Deletes an existing account from the cluster.
--delete-database Creates a new job that will delete a database from the cluster.
--deploy-agents Starts a job to deploy agent-based monitoring using Prometheus and exporters to the nodes.
--deploy-cmonagents Starts a job to deploy Cmon Agents (TopQuery Monitoring) to the nodes.
--disable-recovery Creates a new job that will disable the auto-recovery for the cluster (both cluster auto-recovery and node auto-recovery). The job can optionally be used to also register a maintenance period for the cluster.
--disable-ssl Disable SSL connections on the nodes.
--drop Drops the cluster from the controller.
--enable-recovery Creates a job that will enable auto-recovery for both the cluster and the nodes in the cluster.
--enable-ssl Enable SSL connections on the nodes.
--import-config Creates a job that will import all the configuration files from the nodes of the cluster.
--import-sql-users Imports SQL users to the load balancer. Depending on the actual load balancer this can be only an import or a complete update of the user authentication information known by the load balancer. This is only supported by PgBouncer at the moment. The load balancer nodes where the users are to be imported shall be specified using the --nodes command-line option. See Node List.
--list, -L Lists the clusters.
--list-config This command-line option can be used to print the configuration values for the cluster. The cluster configuration in this context is the Cmon Controller’s configuration for the given cluster.
--list-databases Lists the databases found on the cluster. Please note that if the cluster has a lot of databases, this option might not show some of them. Sampling a huge number of databases would generate a high load and so the controller has an upper limit built into it.
--ping Checks the connection to the controller.
--promote-slave Promotes a slave node to become a master. This main option will of course work only on clusters where it is meaningful, i.e. where slaves and masters are possible.
--reconfigure-node Reconfigures specified nodes of the cluster.
--reinstall-node Reinstalls the software and also reconfigures it on the nodes.
--register Registers an existing cluster in the controller. This option is very similar to the --create option, but of course it will not install a new cluster, it just registers one.
--remove-node Removes a node from the cluster (creates a new job that will remove the node from the cluster). The name (or IP address) of the node should be specified using the --nodes command-line option. See Node List.
--rolling-restart Restarts the nodes (one node at a time) without stopping the cluster.
--set-read-only Creates a job that when executed will set the entire cluster into read-only mode. Please note that not every cluster type supports the read-only mode.
--start Creates a new job to start the cluster.
--stat Prints the details of one or more clusters.
--stop Creates and registers a new job that will stop the cluster when executed.
--sync Synchronize the cluster list with the UI frontend.
--upgrade-cluster Upgrades cluster packages while keeping the same major version.
--upgrade-to-version The newer major version to upgrade to. Without this, only a minor upgrade will be done.
--upgrade-method Strategy for doing a major upgrade. For PostgreSQL, two methods are supported: copy and link. The default is the copy method. The copy method will copy all data by taking a backup of the old version and restoring it to the new version. With the link method, the data files won’t be copied; instead, hard links will be created to the old version’s data files.
--uninstall-cmonagents Starts a job to remove Cmon Agents (TopQuery Monitoring) from the nodes.

Options

Name, shorthand Description
--backup-id=NUMBER The ID of a backup to be restored on the newly created cluster.
--batch Print no messages. If the application created a job, print only the job ID number and exit. If the command prints data, do not use syntax highlighting, headers, or totals, only the pure table to be processed using filters.
--cluster-format=FORMATSTRING The string that controls the format of the printed information about clusters. See Cluster Format.
--cluster-id=ID, -i The ID of the cluster to manipulate.
--cluster-name=NAME, -n Sets the cluster name. If the operation creates a new cluster this will be the name of the new cluster.
--cluster-type=TYPENAME The type of cluster to install. Currently, the following types are supported: galera, mysqlreplication, groupreplication (or group_replication), ndb (or ndbcluster), mongodb (MongoDB ReplicaSet only) and postgresql.
--config-template=FILENAME Use the specified file as a configuration template to create the configuration file for the new cluster.
--create-local-repository Create a local software (APT/YUM) repository mirror when installing software packages. Using this command-line option it is possible to deploy clusters and add nodes offline, without a working internet connection.
--datadir=DIRECTORY The directory on the node(s) that will hold the data. The primary use for this command-line option is to set the data directory path when a cluster is created.
--db-admin=USERNAME The user name of the database administrator (e.g. ‘root’).
--db-admin-passwd=PASSWD The password for the database admin.
--donor=ADDRESS Currently, this option is used when starting a cluster. It can be used to control which node will be started first and used as a donor for the others.
--enterprise-token=TOKEN The customer’s Repo/Download Token for an Enterprise Database.
--job-tags=LIST Tags for the job if a job is created.
--keep-firewall When this option is not specified, the CLI will pass the disable-firewall option to cluster creation and node addition operations. To keep your firewall settings you may pass this option. This option can also be set in the s9s configuration file using the keep_firewall keyword (check s9s.conf(5) for further details).
--local-repository=NAME Use a local repository mirror created by ClusterControl for software deployment.
--long, -l Print the detailed list.
--no-header Do not print headers for tables.
--no-install Skip the cluster software installation part. Assume all software is installed on the node(s). This command-line option is considered when installing a new cluster or adding a new node to an existing cluster.
--nodes=NODE_LIST List of nodes to work with. See Node List.
--os-user=USERNAME The name of the remote user that is used to gain SSH access on the remote nodes. If this command-line option is omitted the name of the local user will be used on the remote host too.
--os-key-file=PATH The path of the SSH key to install on a new container to allow the user to log in. This command-line option can be passed when a new container is created; the argument of the option should be the path of the private key stored on the controller. Although the path of the private key file is passed, only the public key will be uploaded to the new container.
--output-dir=DIR The directory where the files are created. Use in conjunction with the --create-report command.
--percona-client-id=CLIENTID The client ID for the Percona Pro repository.
--percona-pro-token=TOKEN The token for the Percona Pro repository.
--provider-version=VERSION The version string of the software to be installed.
--remote-cluster-id=ID The remote cluster ID used for cluster creation when cluster-to-cluster replication is to be installed. Please note that not all the cluster types support cluster-to-cluster replication.
--semi-sync=[true|false] Specifies the semi-sync mode for MySQL Replication.
--use-internal-repos Use internal repositories when installing software packages. Using this command-line option it is possible to deploy clusters and add nodes offline, without a working internet connection. The internal repositories have to be set up in advance.
--vendor=VENDOR The name of the software vendor to be installed.
--wait Waits for the specified job to end. While waiting, a progress bar will be shown unless the silent mode is set.
--with-ssl Sets up SSL while creating a new cluster.
--with-database Create a new database for the account when creating a new database user account.
--without-ssl If this option is provided SSL will not be set up while creating a new cluster.
--with-tags=LIST Limit the list of printed clusters to those having the given tags. See Cluster Tagging.
--without-tags=LIST Limit the list of printed clusters to those not having the given tags. See Cluster Tagging.
--with-timescaledb Install the TimescaleDB option when creating a new cluster. This is currently only supported on PostgreSQL systems.
ACCOUNT, DATABASE & CONFIGURATION MANAGEMENT
--account=NAME[:PASSWD][@HOST] The account to be created on the cluster.
--db-name=NAME The name of the database.
--opt-group=NAME The option group for configuration.
--opt-name=NAME The name of the configuration item.
--opt-value=VALUE The value for the configuration item.
--with-database Create a database for the user too.
CONTAINER & CLOUD
--cloud=PROVIDER This option can be used when new container(s) are created. The name of the cloud provider where the new container will be created. This command-line option can also be used to filter the list of the containers when used together with one of the --list or --stat options.
--containers=LIST A list of containers to be created and used by the created job. This command-line option can be used to create containers (virtual machines) and install clusters on them or just add them to an existing cluster as nodes. See s9s container for details.
--credential-id=ID The cloud credential ID that should be used when creating a new container. This is an optional value; if not provided the controller will find the credential to be used by the cloud name and the chosen region.
--firewalls=LIST List of firewall (security group) IDs, separated by , or ;, to be used for newly created containers. See s9s-container for further details.
--generate-key Create a new SSH key pair when creating new containers. If this command-line option is provided a new SSH key pair will be created and registered for a new user account to provide SSH access to the new container(s). If the command creates more than one container the same key pair will be registered for all. The username will be the username of the authenticated cmon-user. This can be overruled by the --os-user command-line option. When the job creates a new cluster the generated key pair will be registered for the cluster and the file path will be saved into the cluster’s Cmon configuration file. When adding a node to such a cluster this --generate-key option should not be passed; the controller will automatically re-use the previously created key pair.
--image=NAME The name of the image from which the new container will be created. This option is not mandatory; when a new container is created the controller can choose an image if needed. To find out what images are supported by the registered container servers please issue the s9s server --list-images command.
--image-os-user=NAME The name of the initial OS user defined in the image for the first login. Use this option to create containers based on custom images.
--os-password=PASSWORD This command-line option can be passed when creating new containers to set the password for the user that will be created on the container. Please note that some virtualization backends might not support passwords, only keys.
--subnet-id=ID This option can be used when new containers are created to set the subnet ID for the container. To find out what subnets are supported by the registered container servers please issue the s9s server --list-subnets command.
--template=NAME The name of the container template. See Container Template.
--volumes=LIST When a new container is created this command-line option can be used to pass a list of volumes that will be created for the container. The list can contain one or more volumes separated by the ; character. Every volume consists of three properties separated by the : character: a volume name, the volume size in gigabytes, and a volume type that is either “HDD” or “SSD”. The string vol1:5:hdd;vol2:10:hdd for example defines two hard-disk volumes, one 5 GByte and one 10 GByte. For convenience the volume name and the type can be omitted, in which case automatically generated volume names are used.
--vpc-id=ID This option can be used when new containers are created to set the VPC ID for the container. To find out what VPCs are supported by the registered container servers please issue the s9s server --list-subnets --long command.
LOAD BALANCER
--admin-password=PASSWORD The password for the administrator of load balancers.
--admin-user=USERNAME The username for the administrator of load balancers.
--dont-import-accounts If this option is provided the database accounts will not be imported after the load balancer is installed and added to the cluster. The accounts can be imported later, but this is not going to be part of the load balancer installation performed by the controller.
--haproxy-config-template=FILENAME Configuration template for the HAProxy installation.
--monitor-password=PASSWORD The password of the monitoring user of the load balancer.
--monitor-user=USERNAME The username of the monitoring user of the load balancer.
--maxscale-mysql-user=USERNAME The MySQL username of the MaxScale load balancer.
--maxscale-mysql-password=PASSWORD The password of the MySQL user of the MaxScale load balancer.
SSL
--ssl-ca=PATH The SSL CA file path on the controller.
--ssl-cert=PATH The SSL certificate file path on the controller.
--ssl-key=PATH The SSL key file path on the controller.
--ssl-pass=PASSWD The password for an existing CA private key when registering a cluster.
--move-certs-dir=PATH The path of the directory where the SSL certificates will be moved to on the imported cluster. (Please omit the initial /var/lib/cmon/ca.)

Cluster List

Using the --list and --long command-line options a detailed list of the clusters can be printed. Here is an example of such a list:

$ s9s cluster --list --long
ID STATE TYPE OWNER GROUP NAME COMMENT
1 STARTED replication pipas users mysqlrep All nodes are operational.
Total: 1

The list contains the following fields:

Field Description
ID The cluster ID of the given cluster.
STATE A short string describing the state of the cluster. Possible values are MGMD_NO_CONTACT, STARTED, NOT_STARTED, DEGRADED, FAILURE, SHUTTING_DOWN, RECOVERING, STARTING, UNKNOWN, STOPPED.
TYPE The type of the cluster. Possible values are mysqlcluster, replication, galera, group_repl, mongodb, mysql_single, postgresql_single.
OWNER The user name of the owner of the cluster.
GROUP The group owner’s name.
NAME The name of the cluster.
COMMENT A short human-readable description of the current state of the cluster.

Node List

The list of nodes or hosts is enumerated in a special string using a semicolon as a field separator (e.g. 192.168.1.1;192.168.1.2). The strings in the node list are URLs that can have the following protocols:

URI Description
mysql:// The protocol to install and handle MySQL servers.
ndbd:// The protocol for MySQL Cluster (NDB) data node servers.
ndb_mgmd:// The protocol for MySQL Cluster (NDB) management node servers. The mgmd:// notation is also accepted.
haproxy:// Used to create and manipulate HaProxy servers.
proxysql:// Use this to install and handle ProxySql servers.
maxscale:// The protocol to install and handle MaxScale servers.
mongos:// The protocol to install and handle mongo router servers.
mongocfg:// The protocol to install and handle mongo config servers.
mongodb:// The protocol to install and handle mongo data servers.
pgbackrest:// The protocol to install and handle the PgBackRest backup tool.
pgbouncer:// The protocol to install and handle PgBouncer servers.
pbmagent:// The protocol to install and handle PBMagent (Percona Backup for MongoDB agent) servers.
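As an example of these protocol prefixes in use, the sketch below adds a ProxySQL node to an existing cluster; the address and cluster ID are placeholders:

$ s9s cluster --add-node \
        --cluster-id=1 \
        --nodes="proxysql://192.168.55.198" \
        --wait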

Cluster Format

The string controls the format of the printed information about clusters. When this command-line option is used, the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields and flag characters as they are specified in the standard printf() C library functions. The % specifiers are ended by field name letters to refer to various properties of the clusters.

The %+12I format string, for example, has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the cluster are encoded by letters. In %-5I, for example, the letter I encodes the “cluster ID” field, so the numerical ID of the cluster will be substituted. The standard \ notation is also available; \n, for example, encodes a newline character.

The s9s-tools support the following fields:

Field Description
a The number of active alarms on the cluster.
C The configuration file for the cluster.
c The total number of CPU cores in the cluster. Please note that this number may be affected by hyper-threading. When a computer has 2 identical CPUs, with four cores each, and uses 2x hyper-threading it will count as 2x4x2 = 16.
D The domain name of the controller of the cluster. This is the string one would get by executing the “domainname” command on the controller host.
G The name of the group owner of the cluster.
H The hostname of the controller of the cluster. This is the string one would get by executing the “hostname” command on the controller host.
h The number of the hosts in the cluster including the controller itself.
I The numerical ID of the cluster.
i The total number of monitored disk devices (partitions) in the cluster.
k The total number of disk bytes found on the monitored devices in the cluster. This is a double-precision floating-point number measured in Terabytes. With the f modifier (e.g. %6.2fk) this will report the free disk space in TeraBytes.
L The log file of the cluster.
M A human-readable short message that describes the state of the cluster.
m The size of the memory of all the hosts in the cluster added together, measured in GBytes. This value is represented by a double-precision floating-point number, so formatting it with precision (e.g. %6.2m) is possible. When used with the f modifier (e.g. %6.2fm) this reports the free memory, the memory that is available for allocation, used for cache, or used for buffers.
N The name of the cluster.
n The total number of monitored network interfaces in the cluster.
O The name of the owner of the cluster.
P The CDT path of the cluster.
S The state of the cluster.
T The type of the cluster.
t The total network traffic (both received and transmitted) measured in MBytes/seconds found in the cluster.
V The vendor and the version of the main software (e.g. the MySQL server) on the node.
U The number of physical CPUs on the host.
u The CPU usage percent found on the cluster.
w The total swap space found in the cluster measured in GigaBytes. With the f modifier (e.g. %6.2fk) this reports the free swap space in GigaBytes.
% The % character itself.
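As a sketch, a compact one-line-per-cluster report showing the ID, name, state, and state message could be produced with a format string like this; the chosen fields are only an example:

$ s9s cluster --list \
        --cluster-format="%I %N %S %M\n"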

Cluster Tagging

The concept is very similar to the hash-tags used by popular services such as Twitter and Instagram. A cluster can be created with tags by using the --with-tags option:

$ s9s cluster --create \
        --cluster-name="MyMariaDBGaleraCluster" \
        --cluster-type=galera \
        --provider-version="10.5" \
        --vendor=mariadb  \
        --nodes="mysql://10.10.10.10?hostname_internal=2323" \
        --os-user=vagrant \
        --os-key-file=~/.ssh/id_rsa \
        --with-tags="MDB;DC1;PRODUCTION" \
        --log \
        --print-request

Multiple values are supported using a semi-colon as a delimiter. The tag values are case-sensitive.

An existing cluster can be tagged with the --add-tag and --tag options by specifying the CMON tree path, retrievable using the tree command. To retrieve the “tree path”, one has to list the CMON object tree and look for the column “NAME”, as shown in the following example:

$ s9s tree --list --long
MODE       SIZE OWNER   GROUP   NAME
crwxrwx--- -    system  admins  MariaDB Replication 10.3
srwxrwxrwx -    system  admins  localhost
drwxrwxr-- 1, 0 system  admins  groups
urwxr--r-- -    admin   admins  admin
urwxr--r-- -    nobody  admins  nobody
urwxr--r-- -    system  admins  system

Then, specify the tree path as / + tree name (“MariaDB Replication 10.3”), as shown in the following:

$ s9s tree --add-tag --tag="REPLICATION;PRODUCTION" "/MariaDB Replication 10.3"
Tag is added.

To show all clusters having a certain tag:

$ s9s cluster --list --long --with-tags="PRODUCTION"

Show all clusters that do not have a certain tag:

$ s9s cluster --list --long --without-tags="PRODUCTION"

To filter using multiple tag values:

$ s9s cluster --list --long --with-tags="PRODUCTION;DEV"

Examples

  • Create a three-node Percona XtraDB Cluster 8.0, with OS user vagrant:

    $ s9s cluster --create \
            --cluster-type=galera \
            --nodes="10.10.10.10;10.10.10.11;10.10.10.12" \
            --vendor=percona \
            --provider-version=8.0 \
            --db-admin-passwd='pa$word' \
            --os-user=vagrant \
            --os-key-file=/home/vagrant/.ssh/id_rsa \
            --cluster-name='Percona XtraDB Cluster 8.0'
    
  • Create a three-node MongoDB Replica Set 5.0 by MongoDB Inc (formerly 10gen) and use the default /root/.ssh/id_rsa as the SSH key, and let the deployment job run in the foreground:

    $ s9s cluster --create \
            --cluster-type=mongodb \
            --nodes="10.0.0.148;10.0.0.189;10.0.0.219" \
            --vendor=10gen \
            --provider-version='5.0' \
            --os-user=root \
            --db-admin='admin' \
            --db-admin-passwd='MyS3cr3tPass' \
            --cluster-name='MongoDB ReplicaSet 5.0' \
            --wait
    
  • Create a MongoDB Sharded Cluster with 3 mongos, 3 mongo config servers, and one shard consisting of a three-node replica set called ‘replset2’:

    $ s9s cluster --create \
            --cluster-type=mongodb \
            --vendor=10gen \
            --provider-version=5.0 \
            --db-admin=adminuser \
            --db-admin-passwd=adminpwd \
            --os-user=root \
            --os-key-file=/root/.ssh/id_rsa \
            --nodes="mongos://192.168.1.11;mongos://192.168.1.12;mongos://192.168.1.12;mongocfg://192.168.1.11;mongocfg://192.168.1.12;mongocfg://192.168.1.13;192.168.1.14?priority=5.0;192.168.1.15?arbiter_only=true;192.168.1.16?priority=2;192.168.1.17?rs=replset2;192.168.1.18?rs=replset2&arbiter_only=yes;192.168.1.19?rs=replset2&slave_delay=3&priority=0"
    
  • Import an existing Percona XtraDB Cluster 8.0 and let the import job run in the foreground (provided passwordless SSH from the ClusterControl node to all database nodes has been set up correctly):

    $ s9s cluster --register \
            --cluster-type=galera \
            --nodes="192.168.100.34;192.168.100.35;192.168.100.36" \
            --vendor=percona \
            --provider-version=8.0 \
            --db-admin="root" \
            --db-admin-passwd="root123" \
            --os-user=root \
            --os-key-file=/root/.ssh/id_rsa \
            --cluster-name="My DB Cluster" \
            --wait
    
  • Create a MySQL 8.0 replication cluster by Oracle with multiple masters and slaves (note the ? sign to identify the node’s role in the --nodes parameter):

    $ s9s cluster --create \
            --cluster-type=mysqlreplication \
            --nodes="192.168.1.117?master;192.168.1.113?slave;192.168.1.115?slave;192.168.1.116?master;192.168.1.118?slave;192.168.1.119?slave;" \
            --vendor=oracle \
            --db-admin="root" \
            --db-admin-passwd="root123" \
            --cluster-name=ft_replication_23986 \
            --provider-version=8.0 \
            --log
    
  • Create a Redis Cluster v7 with 6 Redis nodes:

    $ s9s cluster --create \
            --cluster-type=redis-sharded \
            --redis-port=6379 \
            --redis-bus-port=16579 \
            --vendor=redis \
            --provider-version=7 \
            --node-timeout-ms=5000 \
            --replica-validity-factor=10 \
            --db-admin=s9s-user \
            --db-admin-pass=s9ss9s \
            --nodes="redis-primary://rlc2:6479;redis-replica://rlc3:6479;redis-primary://rlc4:6479;redis-replica://rlc5:6479;redis-primary://rlc6;redis-replica://rlc7" \
            --os-user=root \
            --os-key-file=/root/.ssh/id_rsa \
            --cluster-name="My Redis Cluster v7" \
            --log \
            --print-request
    
  • Create a Redis Sentinel v6 with one master and two slaves:

    $ s9s cluster --create \
            --cluster-type=redis \
            --nodes="redis://rlc2;redis://rlc3;redis://rlc4;redis-sentinel://rlc2;redis-sentinel://rlc3;redis-sentinel://rlc4" \
            --os-user=root \
            --os-key-file=/root/.ssh/id_rsa \
            --vendor=redis \
            --provider-version=6  \
            --log \
            --print-request \
            --cluster-name="My Redis Sentinel v6"
    
  • Import an existing Redis Cluster and let the import job run in the foreground (provided passwordless SSH from the ClusterControl node to all database nodes has been set up correctly):

    $ s9s cluster --register \
            --cluster-type=redis-sharded \
            --redis-port=6379 \
            --db-admin=s9s-user \
            --db-admin-passwd=s9ss9s \
            --os-user=root \
            --os-key-file=/root/.ssh/id_rsa \
            --nodes="redis-primary://rlc3:6479" \
            --vendor=redis \
            --log \
            --cluster-name="My Redis Cluster" \
            --print-request \
            --wait
    
  • Create a PostgreSQL 12 streaming replication cluster with one master and two slaves (note the ? sign to identify the node’s role in the --nodes parameter):

    $ s9s cluster --create \
            --cluster-type=postgresql \
            --nodes="192.168.1.81?master;192.168.1.82?slave;192.168.1.83?slave;" \
            --db-admin="postgres" \
            --db-admin-passwd="mySuperStongP455w0rd" \
            --cluster-name=ft_replication_23986 \
            --os-user=vagrant \
            --os-key-file=/home/vagrant/.ssh/id_rsa \
            --provider-version=12 \
            --log
    
  • List all clusters with more details:

    $ s9s cluster --list --long
    
  • Delete a cluster with cluster ID 1:

    $ s9s cluster --delete --cluster-id=1
    
  • Add a new database node on Cluster ID 1:

    $ s9s cluster --add-node \
            --nodes=10.10.10.14 \
            --cluster-id=1 \
            --wait
    
  • Delete a database from the cluster named galera_001:

    $ s9s cluster \
            --delete-database \
            --print-request \
            --cluster-name="galera_001" \
            --db-name="my_database" \
            --log
    
  • Add a data node to an existing MongoDB Sharded Cluster with cluster ID 12 having replica set name ‘replset2’:

    $ s9s cluster --add-node \
            --cluster-id=12 \
            --nodes="mongodb://192.168.1.20?rs=replset2"
    
  • Create an HAProxy load balancer, 192.168.55.198 on cluster ID 1:

    $ s9s cluster --add-node \
            --cluster-id=1 \
            --nodes="haproxy://192.168.55.198" \
            --wait
    
  • Remove a database node from cluster ID 1 as a background job:

    $ s9s cluster --remove-node \
            --nodes=10.10.10.13 \
            --cluster-id=1
    
  • Check if the hosts are part of another cluster and accessible from ClusterControl:

    $ s9s cluster --check-hosts \
            --nodes="10.0.0.148;10.0.0.189;10.0.0.219"
    
  • Schedule a rolling restart of the cluster 20 minutes from now:

    $ s9s cluster --rolling-restart \
            --cluster-id=1 \
            --schedule="$(date -d 'now + 20 min')"
    
  • Create a database on the cluster with the given name:

    $ s9s cluster --create-database \
            --cluster-id=2 \
            --db-name=my_shopping_db
    
  • Create a database account on the cluster and also create a new database to be used by the new user. Grant all access to the new database for the new user:

    $ s9s cluster --create-account \
            --cluster-id=1 \
            --account="john:myPassword135" \
            --with-database
    
  • Create a cluster and tag it using the --with-tags option:

    $ s9s cluster --create \
            --cluster-name="TAGGED_MDB" \
            --cluster-type=galera \
            --provider-version="10.4" \
            --vendor=mariadb \
            --nodes="mysql://10.10.10.14:3306" \
            --os-user=vagrant \
            --with-tags="MDB;DC1;PRODUCTION" \
            --log
    
  • List all databases for every cluster:

    $ s9s cluster --list-databases --long
    SIZE      #TBL #ROWS   OWNER  GROUP  CLUSTER              DATABASE
            0    0       - system admins MySQL Rep Oracle 8.0 sys
    381681664    7  690984 system admins MySQL Rep Oracle 8.0 db5
    381681664    7  690984 system admins MySQL Rep Oracle 8.0 db4
    599785472   11 1083295 system admins MySQL Rep Oracle 8.0 db3
    272629760    5  493560 system admins MySQL Rep Oracle 8.0 db2
    381681664    7  690984 system admins MySQL Rep Oracle 8.0 db1
      7340032    2       0 system admins PostgreSQL 13        postgres
      7340032    0       0 system admins PostgreSQL 13        template1
      7340032    0       0 system admins PostgreSQL 13        template0
      7340032    0       0 system admins PostgreSQL 13        db1
      7340032    0       0 system admins PostgreSQL 13        db2
    Total: 11 databases, 2059403264, 39 tables.
    
  • Show all clusters having a certain tag (an existing cluster can be tagged using s9s tree), with multiple tags separated by a semi-colon:

    $ s9s cluster --list \
            --long \
            --with-tags="PRODUCTION;big_cluster"
    
  • List out CMON configuration options for cluster ID 2:

    $ s9s cluster --list-config --cluster-id=2
    
  • Change a CMON configuration option called diskspace_warning for cluster ID 2 (the configuration change will be applied to the CMON runtime and the configuration file, /etc/cmon.d/cmon_2.cnf):

    $ s9s cluster --change-config \
            --cluster-id=2 \
            --opt-name=diskspace_warning \
            --opt-value=70
    
  • Upgrade cluster packages while keeping the major version for cluster ID 1:

    $ s9s cluster \
            --upgrade-cluster \
            --cluster-id=1 \
            --nodes="192.168.0.84:5433;192.168.0.85" \
            --wait
    
  • Upgrade cluster packages to a newer major version for cluster ID 1.

    $ s9s cluster \
            --upgrade-cluster \
            --upgrade-to-version=12 \
            --cluster-id=1 \
            --log
    
  • Upgrade cluster packages to a newer major version for cluster ID 1 with a specific upgrade method.

    $ s9s cluster \
            --upgrade-cluster \
            --upgrade-to-version=12 \
            --upgrade-method=link \
            --cluster-id=1 \
            --log
    
  • Deploy CMON agents to all nodes in cluster ID 1 (this enables the top query monitoring functionality for the cluster nodes):

    $ s9s cluster \
            --deploy-cmonagents \
            --cluster-id=1 \
            --print-request \
            --log \
            --wait
    
  • Uninstall CMON agents from the specified nodes in cluster ID 1 (if no node is specified, CMON agents will be uninstalled from all nodes in the cluster):

    $ s9s cluster \
            --uninstall-cmonagents \
            --cluster-id=1 \
            --nodes=10.67.199.164 \
            --print-request \
            --log \
            --wait
    

s9s-container

Manage cloud and container virtualization. Multiple technologies (multiple virtualization backends) are supported (e.g. Linux LXC and AWS) providing various levels of virtualization. Throughout this documentation (and in fact in the command line options) s9s uses the word “container” to identify virtualized servers. The actual virtualization backend might use the term “virtual machine” or “Linux container” but s9s provides a high-level generic interface to interact with them, so the generic “container” term is used. So please note, the term “container” does not necessarily mean “Linux container”, it means “a server that is running in some kind of virtualized environment”.

In order to utilize the s9s command-line tool and the CMON Controller to manage virtualization, a virtualization host (container server) has to be installed first. The installation of such a container environment is documented in the s9s-server section.

Usage

s9s container {command} {options}

Commands

Name, shorthand Description
--create Creates and starts a new container or virtual machine. If this option is provided, the controller will create a new job that creates a container. By default the container will also be started, an account will be created, passwordless sudo will be granted, and the controller will wait for the container to obtain an IP address.
--delete Stops and deletes the container or virtual machine.
--list, -L Lists the containers. See Container List.
--start Starts an existing container.
--stat Prints the details of a container.
--stop Stops the container. This will not remove the container by default, but it will stop it. If the container is set to be deleted on stop (temporary), it will be deleted.

Options

Name, shorthand Description
--log Waits until the job is executed. While waiting, the job logs will be shown unless the silent mode is set.
--recurrence=CRONTABSTRING This option can be used to create recurring jobs, jobs that are repeated over and over again until they are manually deleted. Every time the job is repeated a new job will be instantiated by copying the original recurring job and starting the copy. The option's argument is a crontab-style string defining the recurrence of the job. See Crontab.
--schedule=DATETIME The job will not be executed now but is scheduled to execute later. The DateTime string is sent to the backend, so all the formats supported by the controller can be used.
--timeout=SECONDS Sets the timeout for the created job. If the execution of the job is not done before the timeout, counted from the start time of the job, expires, the job will fail. Some jobs might not support the timeout feature; the controller might ignore this value.
--wait Waits until the job is executed. While waiting, a progress bar will be shown unless the silent mode is set.
--cloud=PROVIDER This option can be used when new container(s) are created. The name of the cloud provider where the new container will be created. This command-line option can also be used to filter the list of the containers when used together with one of the --list or --stat options.
--container-format=FORMATSTRING The string controls the format of the printed information about the containers. See Container Format.
--containers=LIST A list of containers to be created or managed. The containers can be passed as command-line options (suitable for simple commands) or as an optional argument for this command-line option. The s9s container --stop node01 and the s9s container --stop --containers=node01 commands, for example, are equivalent. See Create Container List.
--credential-id=ID The cloud credential ID that should be used when creating a new container. This is an optional value; if not provided, the controller will find the credential to be used by the cloud name and the chosen region.
--firewalls=LIST List of firewall (AKA security group) IDs separated by , or ; to be used for newly created containers. This is not a mandatory option; if the virtualization server needs a firewall to be set, one such firewall will be automatically created. Containers created in the same job (for example in a create cluster operation) will share the same firewall, so they will be able to communicate. If the container is created so that it will be added to an existing cluster (e.g. in an add node job), the controller will try to find the firewall of the existing nodes and, if it exists, will re-use the same ID, so that the nodes can reach each other.
--generate-key Create a new SSH keypair when creating new containers. If this command-line option is provided, a new SSH keypair will be created and registered for a new user account to provide SSH access to the new container(s). If the command creates more than one container, the same keypair will be registered for all of them. This command-line option is especially useful when a new cluster is created together with the new containers.
--image=NAME The name of the image from which the new container will be created. This option is not mandatory; when a new container is created the controller can choose an image if one is needed. To find out what images are supported by the registered container servers, please issue the s9s server --list-images command.
--image-os-user=NAME The name of the initial OS user defined in the image for the first login. Use this option to create containers based on custom images.
--os-key-file=PATH The path of the SSH key to install on a new container to allow the user to log in. This command-line option can be passed when a new container is created; the argument of the option should be the path of the private key stored on the controller. Although the path of the private key file is passed, only the public key will be uploaded to the new container.
--os-password=PASSWORD This command-line option can be passed when creating new containers to set the password for the user that will be created on the container. Please note that some virtualization backends might not support passwords, only keys.
--os-user=USERNAME This option may be used when creating new containers to pass the name of the user that will be created on the new container. Please note that this option is not mandatory, because the controller will create an account whose name is the same as the name of the cmon user creating the container. The public key of the cmon user will also be registered (if the user has an associated public key) so the user can actually log in.
--region=REGION The name of the region in which the container is created.
--servers=LIST A list of servers to work with.
--subnet-id=ID This option can be used when new containers are created to set the subnet ID for the container. To find out what subnets are supported by the registered container servers, please issue the s9s server --list-subnets command.
--template=NAME The name of the container template. See Container Template.
--volumes=LIST When a new container is created this command-line option can be used to pass a list of volumes that will be created for the container. See Volume List.
--vpc-id=ID This option can be used when new containers are created to set the VPC ID for the container. To find out what VPCs are supported by the registered container servers, please issue the s9s server --list-subnets --long command.

Container Format

The string controls the format of the printed information about the containers. When this command line option is used the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields and flag characters as they are specified in the standard printf() C library functions. The % specifiers are ended by field name letters to refer to various properties of the containers.

The %+12i format string for example has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the container are encoded by letters. In the %16D, for example, the letter D encodes the data directory field, so the full path of the data directory on the container will be substituted. Standard \ notation is also available; \n, for example, encodes a new-line character.

The s9s-tools support the following fields:

Field Description
A The IP address of the container. This is by default the public IPv4 address of the container. Containers being deleted/created might not have any IP addresses, then the - string is substituted.
a The private IP address of the container if there is any or the - string.
C The full path of the configuration file that stores the container settings if such a configuration file exists.
c The cloud (sometimes mentioned as ‘provider’) of the container, for example, AWS or AZ as it is set in the credentials file /var/lib/cmon/cloud_credentials.json.
F The name of the first firewall (security group) if the container has such a property set, the string - otherwise.
G The name of the group owner of the node.
I The ID of the container.
i The name of the image that was used to create the container.
N The name (alias) of the container.
O The username of the owner of the container.
S The state of the container as a string.
P The CDT path of the user.
p The name of the parent server, the container server that manages the container.
R The name of the region in which the container is hosted.
r The address range of the subnet the container belongs to in CIDR notation (e.g. 10.0.0.0/24).
T The type of container (e.g. cmon-cloud or lxc).
t The name of the template that was used to create a container or the - string if no such template was used.
U The ID of the subnet of the container.
V The ID of the VPC for the container.
z The class name of the container object.
% The % character itself.
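
As a quick illustration of the format string, the name, state, and public IP address of every container could be printed like this (a minimal sketch; it only uses the N, S and A fields from the table above and assumes at least one container server is registered):

$ s9s container \
        --list \
        --container-format="%N %S %A\n"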

Container List

Using the --list and --long command-line options a detailed list of the containers can be printed. Here is an example of such a list:

$ s9s container --list --long
S TYPE TEMPLATE OWNER GROUP     NAME                 IP ADDRESS    SERVER
- lxc  -        pipas testgroup bestw_controller     -             core1
u lxc  -        pipas testgroup dns1                 192.168.0.2   core1
u lxc  ubuntu   pipas testgroup ft_containers_35698  192.168.0.228 core1
u lxc  -        pipas testgroup mqtt                 192.168.0.5   core1
- lxc  -        pipas testgroup ubuntu               -             core1
u lxc  -        pipas testgroup www                  192.168.0.19  core1
Total: 6 containers, 4 running.

The list contains the following fields:

Field Description
S The abbreviated status information. This is u for a container that is up and running and - otherwise.
TYPE Shows what kind of container or virtual machine is shown in this line, i.e. the type of software that provides the virtualization.
TEMPLATE The name of the template that is used to create the container.
OWNER The owner of the server object.
GROUP The group owner of the server object.
NAME The name of the container. This is not necessarily the hostname, this is a unique name to identify the container on the host.
IP ADDRESS The IP address of the container or the - character if the container has no IP address.
SERVER The server on which the container can be found.

Create Container List

The command-line option argument is one or more containers separated by the ; character. Each container is a URL defining the container name (an alias for the container) and zero or more properties. The string container05?parent_server=core1;container06?parent_server=core2, for example, defines two containers, one on one server and the other on the other server.

To see what properties are supported in the controller for the containers, one may use the following command:

$ s9s metatype --list-properties --type=CmonContainer --long
ST NAME         UNIT DESCRIPTION
r- acl          -    The access control list.
r- alias        -    The name of the container.
r- architecture -    The processor architecture.

See Property List for details.
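
As an illustration of this syntax, two containers could be created on two different container servers in a single job (the container aliases node01 and node02 and the server names core1 and core2 below are placeholders):

$ s9s container \
        --create \
        --containers="node01?parent_server=core1;node02?parent_server=core2" \
        --wait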

Container Template

Defining a template is an easy way to set a number of complex properties without actually enumerating them in the command line one by one. The actual interpretation of the template name is up to the virtualization backend that is the protocol of the container server. The lxc backend for example considers the template to be an already created container, it simply creates the new container by copying the template container so the new container inherits everything.

The template name can also be provided as a property of the container, so the command s9s container --create --containers="node02?template=ubuntu;node03" --log for example will create two containers, one using a template, the other using the default settings.

Note that the --template command-line option is not mandatory; if omitted, suitable default values will be chosen, but if a template is provided and it is not found, the creation of the new container will fail.
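
For example, a container could be created from a named template by using the --template command-line option directly (a sketch; the template name ubuntu and the container alias node04 are placeholders and must be valid on the container server):

$ s9s container \
        --create \
        --template=ubuntu \
        --containers="node04" \
        --wait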

Volume List

The list can contain one or more volumes separated by the ; character. Every volume consists of three properties separated by the : character, a volume name, the volume size in gigabytes, and a volume type that is either “HDD” or “SSD”. The string vol1:5:hdd;vol2:10:hdd for example defines two hard-disk volumes, one 5GByte, and one 10GByte.

For convenience, the volume name and the type can be omitted, so that automatically generated volume names are used.
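
For example, a container with the two hard-disk volumes described above could be created like this (a sketch; the volume and container names are placeholders):

$ s9s container \
        --create \
        --volumes="vol1:5:hdd;vol2:10:hdd" \
        --containers="node05" \
        --wait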

Examples

Create a container with no special information needed, every setting will use the default values. For this, of course, at least one container server has to be pre-registered and properly working:

$ s9s container --create --wait

Using the default, automatically chosen container names might not be the easiest way, so here is an example that provides a container name:

$ s9s container --create --wait node01

This is equivalent to the following example that provides the container name through a command-line option:

$ s9s container --create --wait --containers="node01"

s9s-controller

View and handle controllers (CMON instances); this allows building a highly available cluster of CMON instances to achieve ClusterControl's high availability.

Note

The CMON HA feature is still in beta.

This command can help set up CMON with the high availability feature using the following simple steps:

  1. Install a CMON Controller together with the CMON Database serving as permanent storage for the controller. The CMON HA will not replicate the CMON Database, so it has to be accessible from all the controllers and if necessary it has to provide redundancy on itself.
  2. Enable the CMON HA subsystem using the --enable-cmon-ha option on the running controller. This will create one CmonController class object. Check the object using the --list or --stat option. The CMON HA is now enabled, but there is no redundancy; only one controller is running. The one existing controller at this stage should be a leader although there are no followers.
  3. Install additional Cmon Controllers one by one and start them the usual way. The next controllers should use the same CMON Database and should have the same configuration files. When the additional controllers are started they will find the leader in the CMON Database and will ask the leader to let them join. When the join is successful one more CmonController will be created for every joining controller.
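
After the additional controllers have joined, the membership and the leader/follower state of every controller can be checked from any of them (the same commands are shown in the Examples below):

$ s9s controller --list --long
$ s9s controller --stat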

Usage

s9s controller {command} {options}

Commands

Name, shorthand Description
--create-snapshot Creates a job that will create a controller-to-controller snapshot of the Cmon HA subsystem. Creating a snapshot manually using this command-line option is not necessary for the Cmon HA to operate; this command-line option is made for testing and repairing.
--enable-cmon-ha Enables the CMON HA subsystem. By default, Cmon HA is not enabled for compatibility reasons, so this command-line option is implemented to enable the controller-to-controller communication. When the CMON HA is enabled, CmonController class objects will be created and used to implement the high availability features (e.g. the leader election). So if the controller has at least one CmonController object, the CMON HA is enabled; if not, it is not enabled.
--get-ldap-config Gets the LDAP configuration from the controller and prints it to the standard output.
--list Lists the CmonController type objects known to the controller. If the CMON HA is not enabled there will be no such objects; if it is enabled, one or more controllers will be listed. With the --long option a more detailed list will be shown where the state of the controllers can be checked.
--ping Sends a ping request to the controller and prints the information received. Please note that there is another ping request for the clusters, but this ping request is quite different from that. This request does not need a cluster, it is never redirected (follower controllers will also reply to this request), and it is replied to with some basic information about the Cmon HA subsystem.
--stat Prints more details about the controller objects.

Examples

Create a controller to controller snapshot of CMON HA subsystem:

$ s9s controller \
        --create-snapshot \
        --log

Enable CMON HA feature for the current CMON instance:

$ s9s controller --enable-cmon-ha

The command-line option itself is only syntactic sugar to save a “true” boolean value into a CDT entry, so for example the following command can also be used:

$ echo "true" | \
s9s tree \
--save \
--cmon-user=system \
--password=secret \
--controller="https://10.10.1.23:9501" \
--batch \
/.runtime/cmon_ha/enabled

Print out LDAP configuration from the controller:

$ s9s controller \
        --cmon-user="system" \
        --password="XXXXXXXX" \
        --get-ldap-config \
        --print-json

List out all controllers participating in the CMON HA cluster:

$ s9s controller \
        --list \
        --long

Send ping request to the controller and print the output in JSON format, with some text filtering on the output:

$ s9s controller \
        --ping \
        --print-json \
        --json-format='status: ${controller_status}\n'

Print more details about the controller objects:

$ s9s controller --stat

s9s-job

View and manage jobs.

Usage

s9s job {command} {options}

Commands

Name, shorthand Description
--clone Creates a copy of the job to re-run it. The clone will have all the properties the original job had and will be executed the same way as new jobs are executed. If the --cluster-id command-line option is used, the given cluster will execute the job; if not, the clone will have the same cluster the original job had.
--delete Deletes the job referenced by the job ID.
--disable This command line option can be used to disable (pause) a scheduled or recurring job instance.
--enable This command line option can be used to enable a disabled (scheduled or recurring) job instance.
--fail Creates a job that does nothing and fails.
--kill This command-line option can be used to send a signal to a running Cmon Job in order to abort its execution. The job subsystem in the controller is not preemptive, so the job will only actually be aborted if and when the job supports aborting what it is doing.
--list, -L Lists the jobs. See Job List.
--log Prints the job messages of the specified job.
--success Creates a job that does nothing and succeeds.
--wait Waits for the specified job to end. While waiting, a progress bar will be shown unless the silent mode is set.

Options

Name, shorthand Description
NEWLY CREATED JOB
--follow, -f Combination of --log and --wait: s9s will attach to an existing running job and print out its job messages while it is running.
--job-tags=LIST List of one or more strings separated by either , or ; to be added as tags to a newly created job if a job is indeed created.
--log Waits for the specified job to end. While waiting, the job logs will be shown unless the silent mode is set.
--recurrence=CRONTABSTRING Creates recurring jobs, jobs that are repeated over and over again until they are manually deleted. See Crontab.
--schedule=DATETIME The job will not be executed now but is scheduled to execute later. The datetime string is sent to the backend, so all the formats supported by the controller can be used.
--timeout=SECONDS Sets the timeout for the created job. If the execution of the job is not done before the timeout, counted from the start time of the job, expires, the job will fail. Some jobs might not support the timeout feature, and the controller might ignore this value.
--wait Waits for the specified job to end. While waiting, a progress bar will be shown unless the silent mode is set.
JOB RELATED OPTIONS
--job-id=ID The job ID of the job to handle or view.
--from=DATE&TIME Controls the start time of the period that will be printed in the job list.
--limit=NUMBER Limits the number of jobs printed.
--offset=NUMBER Controls the relative index of the first item printed.
--show-aborted Turns on the job state filtering and shows jobs that are in an aborted state. This command-line option can be used while printing job lists together with the other --show-* options.
--show-defined Turns on the job state filtering and shows jobs that are in a defined state. This command-line option can be used while printing job lists together with the other --show-* options.
--show-failed Turns on the job state filtering and shows jobs that have failed. This command-line option can be used while printing job lists together with the other --show-* options.
--show-finished Turns on the job state filtering and shows jobs that are finished. This command-line option can be used while printing job lists together with the other --show-* options.
--show-running Turns on the job state filtering and shows jobs that are running. This command-line option can be used while printing job lists together with the other --show-* options.
--show-scheduled Turns on the job state filtering and shows jobs that are scheduled. This command-line option can be used while printing job lists together with the other --show-* options.
--until=DATE&TIME Controls the end time of the period that will be printed in the job list.
--log-format=FORMATSTRING The string that controls the format of the printed log and job messages. See Log Format Variables.
--with-tags=LIST List of one or more strings separated by either , or ; to be used as a filter when printing information about jobs. When this command-line option is provided, only the jobs that have any of the tags will be printed.
--without-tags=LIST List of one or more strings separated by either , or ; to be used as a filter when printing information about jobs. When this command-line option is provided, the jobs that have any of the tags will not be printed.
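
For instance, the state filters and the tag filter can be combined when listing jobs (a sketch; the tag name upgrade is a placeholder):

$ s9s job \
  --list \
  --show-failed \
  --show-aborted \
  --with-tags="upgrade"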

Crontab

Every time the job is repeated a new job will be instantiated by copying the original recurring job and starting the copy. The option argument is a crontab-style string defining the recurrence of the job.

The crontab string must have exactly five space-separated fields as follows:

Field Value
minute 0 – 59
hour 0 – 23
day of the month 1 – 31
month 1 – 12
day of the week 0 – 7

All the fields may be a simple expression or a list of simple expressions separated by a comma (,). So, to clarify: the fields are separated by spaces, and each field can contain subfields separated by a comma.

The simple expression is either a star (*) representing “all the possible values”, an integer number representing the given minute, hour, day or month (e.g. 5 for the fifth day of the month), or two numbers separated by a dash representing an interval (e.g. 8-16 representing every hour from 8 to 16). The simple expression can also define a “step” value, so, for example, */2 might stand for “every other hour” and 8-16/2 might stand for “every other hour between 8 and 16”.

Please check the crontab man page for more details.
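
As an example of the syntax, a recurring do-nothing job that runs at 02:30 on weekdays only could be registered like this (the --success job is a harmless placeholder, similar to the recurring example in the Examples section below):

$ s9s job --success --recurrence="30 2 * * 1-5"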

Job List

Using the --list command-line option a detailed list of jobs can be printed (the --long option results in even more details). Here is an example of such a list:

$ s9s job --list
ID CID STATE    OWNER  GROUP  CREATED             RDY  TITLE
1  0   FINISHED pipas  users  2017-04-25 14:12:31 100% Create MySQL Cluster
2  1   FINISHED system admins            03:00:15 100% Removing Old Backups
Total: 2

The list contains the following fields:

Field Description
ID The numerical ID of the job. The --job-id command-line option can be used to pass such ID numbers.
CID The cluster ID. Most of the jobs are related to one specific cluster so those have a cluster ID in this field. Some of the jobs are not related to any cluster, so they are shown with cluster ID 0.
STATE The state of the job. The possible values are DEFINED, DEQUEUED, RUNNING, SCHEDULED, ABORTED, FINISHED, and FAILED.
OWNER The user name of the user who owns the job.
GROUP The name of the group owner.
CREATED The date and time showing when the job was created. The format of this timestamp can be set using the --date-format command-line option.
RDY A progress indicator showing what percentage of the job has been done. Please note that some jobs have no estimation available, so this value remains 0% for the entire execution time.
TITLE A short, human-readable description of the job.

Log Format Variables

The format string uses the % character to mark variable fields, flag characters as they are specified in the standard printf() C library functions and its own field name letters to refer to the various properties of the messages.

The %+12I format string for example has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. Standard \ notation is also available, \n for example encodes a new-line character.

The properties of the message are encoded by letters. In the %-5L, for example, the letter L encodes the “line-number” field, so the number of the source line that produced the message will be substituted. The program supports the following fields:

Variable Description
B The basename of the source file that produced the message.
C The creation date and time that mark the exact moment when the message was created. The format of the date&time substituted can be set using the --date-format command-line option.
F The name of the source file that created the message. This is similar to the B field, but instead of the base name, the entire file name will be substituted.
I The ID of the message, a numerical ID that can be used as a unique identifier for the message.
J The Job ID.
L The line number in the source file where the message was created. This property is implemented mostly for debugging purposes.
M The message text.
S The severity of the message in text format. This field can be “MESSAGE”, “WARNING” or “FAILURE”.
T The creation time of the message. This is similar to the C field but shows only hours, minutes and seconds instead of the full date and time.
% The % character itself.
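
For example, only the time, severity, and message text of each job message could be printed with a format string like this (a sketch; job ID 42 is a placeholder):

$ s9s job \
  --log \
  --job-id=42 \
  --log-format="%T %S %M\n"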

Examples

List jobs:

$ s9s job --list
10235 RUNNING dba 2017-01-09 10:10:17 2% Create Galera Cluster
10233 FAILED  dba 2017-01-09 10:09:41 0% Create Galera Cluster

The s9s client will send a job that will be executed in the background by cmon. It will print out the job ID, for example: “Job with ID 57 registered.”

It is then possible to attach to the job to find out the progress:

$ s9s job --wait --job-id=57

View job log messages of job ID 10235:

$ s9s job --log --job-id=10235

Delete the job that has the job ID 42:

$ s9s job --delete --job-id=42

Disable the job that has the job ID 102:

$ s9s job --disable --job-id=102

Enable the job that has the job ID 102:

$ s9s job --enable --job-id=102

Create a job that runs every 5 minutes and does nothing at all. This can be used for testing and demonstrating the recurring jobs without doing any significant or dangerous operations.

$ s9s job --success --recurrence="*/5 * * * *"

Clone job ID 14, run it as a new job, and see the job messages:

$ s9s job --clone \
  --job-id=14 \
  --log

Kill a running job with job ID 20:

$ s9s job --kill \
  --job-id=20

s9s-log

View logs.

Usage

s9s log {command} {options}

Commands

Name, shorthand Description
--list, -L Lists all log entries.

Options

Name, shorthand Description
CLUSTER RELATED OPTIONS
--cluster-id=ID The ID of the cluster to check.
--cluster-name=NAME The NAME of the cluster to check.
JOB RELATED OPTIONS
--limit=NUMBER Limits the number of log messages printed.
--offset=NUMBER Controls the relative index of the first item printed.

Attention

The following log-related options have been deprecated since version 2.2.0 (Sept 2024):

  • --from=DATE&TIME
  • --until=DATE&TIME
  • --message-id=ID
  • --log-format=FORMATSTRING
  • --long

Examples

List log messages for Cluster ID 1:

$ s9s log \
    --list \
    --cluster-id=1

List log messages for Cluster ID 1, except the last 10 log records:

$ s9s log \
    --list \
    --cluster-id=1 \
    --offset=10

List only the last 20 log messages for Cluster ID 1:

$ s9s log \
    --list \
    --cluster-id=1 \
    --limit=20

List 20 log messages skipping the latest 20 log records for Cluster ID 1:

$ s9s log \
    --list \
    --cluster-id=1 \
    --limit=20 \
    --offset=20

s9s-maintenance

View and manipulate maintenance periods.

Usage

s9s maintenance {command} {options}

Commands

Name, shorthand Description
--create Creates a new maintenance period.
--current Prints the active maintenance period for a cluster or for a host. Prints nothing if no maintenance period is active.
--list, -L Lists the registered maintenance periods. See Maintenance List.
--delete Deletes an existing maintenance period. The maintenance periods are identified by their UUID strings. By default the UUID strings are shown in an abbreviated format; the full-length UUID strings are shown when the --full-uuid command-line option is provided. Deleting a maintenance period is also possible by providing only the first few characters of the UUID, when these first characters are unique and enough to identify the maintenance period.
--next Prints information about the next maintenance period for a cluster or host. Prints nothing if no maintenance is registered to be started in the future.

Options

Name, shorthand Description
--begin=DATETIME A string representation of the date and time when the maintenance period will start.
--cluster-id=ID The cluster for cluster maintenance.
--end=DATETIME A string representation of the date and time when the maintenance period will end.
--full-uuid Print the full UUID.
--nodes=NODELIST The nodes for the node maintenance. See Node List.
--reason=STRING The reason for the maintenance.
--start=DATETIME A string representation of the date and time when the maintenance period will start. This option is deprecated, please use the --begin option instead.
--uuid=UUID The UUID to identify the maintenance period.

Maintenance List

Using the --list and --long command-line options a detailed list of the registered maintenance periods can be printed:

$ s9s maintenance --list --long
ST UUID    OWNER  GROUP  START    END      HOST/CLUSTER  REASON
Ah a7e037a system admins 11:21:24 11:41:24 192.168.1.113 Rolling restart.
Total: 1

The list contains the following fields:

Field Description
ST The short status information, where at the first character position A stands for ‘active’ and - stands for ‘inactive’. In the second character position h stands for ‘host-related maintenance’ and c stands for ‘cluster-related maintenance’.
UUID The unique string that identifies the maintenance period. Normally only the first few characters of the UUID are shown, but if the --full-uuid command-line option is provided the full-length string will be printed.
OWNER The name of the owner of the given maintenance period.
GROUP The name of the group owner of the maintenance period.
START The date and time when the maintenance period starts.
END The date and time when the maintenance period expires.
HOST/CLUSTER The name of the cluster or host under maintenance.
REASON A short human-readable description showing why maintenance is required.

Examples

Create a maintenance period for PostgreSQL node 10.35.112.21, starting at 05:44:55 AM for one full day (cmon expects UTC time to create a maintenance):

$ s9s maintenance \
      --create \
      --nodes=10.35.112.21:5432 \
      --begin=2024-05-15T05:44:55.000Z \
      --end=2024-05-16T05:44:55.000Z \
      --reason='Upgrading RAM' \
      --batch

Create a new maintenance period for 192.168.1.121 which shall start tomorrow and finish an hour later:

$ s9s maintenance --create \
      --nodes=192.168.1.121 \
      --begin="$(date --date='now + 1 day' --utc '+%Y-%m-%d %H:%M:%S')'" \
      --end="$(date --date='now + 1day + 1 hour' --utc '+%Y-%m-%d %H:%M:%S')" \
      --reason="Upgrading software."

List out all nodes that are under a maintenance period:

$ s9s maintenance --list --long
ST UUID    OWNER GROUP START    END      HOST/CLUSTER REASON
-h 70346c3 dba   admin 07:42:18 08:42:18 10.0.0.209   Upgrading RAM
Total: 1

Delete a maintenance period for UUID 70346c3:

$ s9s maintenance --delete --uuid=70346c3

Check if there is any ongoing maintenance period for cluster ID 1:

$ s9s maintenance --current --cluster-id=1

Check the next maintenance period scheduled for node 192.168.0.227 for cluster ID 1:

$ s9s maintenance \
      --next \
      --cluster-id=1 \
      --nodes="192.168.0.227"

s9s-metatype

Lists meta-types supported by the controller.

Usage

s9s metatype {command} {options}

Commands

Name, shorthand Description
--list, -L Lists the names of the types the controller supports. See Property List.
--list-cluster-types Lists the cluster types the controller supports. With the --long option also lists the vendors and the versions.
--list-properties Lists the properties the given metatype has. Use the --type command-line option to pass the name of the metatype.

Options

Name, shorthand Description
--type=TYPENAME The name of the type.

Property List

Using the --list-properties and --long command line options a detailed list of the metatype properties can be printed:

$ s9s metatype --list-properties --type=CmonUser --long
ST NAME          UNIT DESCRIPTION
rw email_address -    The email address of the user.
rw first_name    -    The first name of the user.
rw groups        -    The list of groups for the user.
rw job_title     -    The job title of the user.
r- last_login    -    The date&time of the last successful login.

The list contains the following fields:

Field Description
ST The short status information, where at the first character position r stands for ‘readable’ and - shows that the property is not readable by the client program. At the second character position w stands for ‘writable’ and - shows that the property is not writable by the client.
NAME The name of the property.
UNIT The unit in which the given property is measured (e.g. ‘byte’). This field shows a single - character if the unit is not applicable.
DESCRIPTION The short human-readable description of the property.

Examples

List the metatypes the controller supports:

$ s9s metatype --list

List a detailed list of the properties the CmonUser type has:

$ s9s metatype \
        --list-properties \
        --type=CmonUser \
        --long

List all cluster types currently managed by this controller:

$ s9s metatype \
        --list-cluster-types \
        --long

s9s-node

View and handle nodes.

Usage

s9s node {command} {options}

Commands

Name, shorthand Description
--change-config Changes configuration values for the given node.
--enable-binary-logging Creates a job to enable binary logging on a specific node. Not all clusters support this feature (MySQL does). One needs to enable binary logging in order to set up cluster-to-cluster replication.
--list, -L Lists nodes. Defaults to all clusters.
--list-config Lists the configuration values for the given node.
--pull-config Copies the configuration file(s) from the node to the local computer. Use the --output-dir option to control where the files will be created.
--push-config Copies the configuration file(s) to a node.
--register Registers a node that is already working.
--restart Restarts the node. This means the process that provides the main functionality on the node (e.g. the MySQL daemon on a Galera node) will be stopped and then started again.
--set Sets various properties of the specified node/host.
--set-config Changes configuration values for the given node.
--set-read-only Creates a job that sets the node to read-only mode. Please note that not all cluster types support read-only mode.
--set-read-write Creates a job that sets the node to read-write mode if it was previously set to read-only mode. Please note that not all cluster types support read-only mode.
--start Starts the node. This means the process that provides the main functionality on the node (e.g. the MySQL daemon on a Galera node) will be started.
--stat Prints detailed node information. It can be used in conjunction with --graph to produce statistical data. See Graph Options.
--stop Stops the node. This means the process that provides the main functionality on the node (e.g. the MySQL daemon on a Galera node) will be stopped.
--unregister Unregisters the node from ClusterControl.

Options

Name, shorthand Description
--cluster-id=ID, -i The ID of the cluster in which the node is.
--cluster-name=NAME, -n Name of the cluster to list.
--nodes=NODELIST The nodes to list or manipulate. See Node List.
--node-format[=FORMATSTRING] The string that controls the format of the printed information about the nodes. See Node Format.
--opt-group=GROUP The configuration option group.
--opt-name=NAME The name of the configuration option.
--opt-value=VALUE The value of the configuration option.
--output-dir=DIRECTORY The directory where the output files will be created on the local computer.
--graph=GRAPH_NAME The name of the graph to show. See Graph Options.
--begin=TIMESTAMP The start time of the graph interval (the X-axis).
--end=TIMESTAMP The end time of the graph interval.
--density If this option is provided, a probability density function (or histogram) will be printed instead of a timeline. The X-axis shows the measured values (e.g. MByte/s) while the Y-axis shows what percentage of the measurements contain the value. If, for example, the CPU usage is between 0% and 1% for 90% of the time, the graph will show a 90% bump at the lower end.
--force Forces the execution of potentially dangerous operations like restarting a master node in a MySQL Replication cluster.
--properties=ASSIGNMENT One or more assignments specifying property names and values. The assignment operator is the = character (e.g. --properties='alias="newname"'); multiple assignments are separated by the semicolon (;).

Graph Options

When providing a valid graph name together with the --stat option a graph will be printed with statistical data. Currently, the following graphs are available:

Option Description
cpughz The graph will show the CPU clock frequency measured in GHz.
cpuload Shows the average CPU load of the host computer.
cpusys Percent of time the CPU spent in kernel mode.
cpuidle Percent of time the CPU is idle on the host.
cpuiowait Percent of time the CPU is waiting for IO operations.
cputemp The temperature of the CPU measured in degree Celsius. Please note that to measure the CPU temperature some kernel module might be needed (e.g. it might be necessary to run sudo modprobe coretemp). On multiprocessor systems, the graph might show only the first processor.
cpuuser Percent of time the CPU is running user space programs.
diskfree The amount of free disk space measured in GBytes.
diskreadspeed Disk read speed measured in MBytes/sec.
diskreadwritespeed Disk read and write speed measured in MBytes/sec.
diskwritespeed Disk write speed measured in MBytes/sec.
diskutilization The bandwidth utilization for the device in percent.
memfree The amount of free memory measured in GBytes.
memutil The memory utilization of the host measured in percent.
neterrors The number of receive and transmit errors on the network interface.
netreceivedspeed Network read speed in MByte/sec.
netreceiveerrors The number of packets received with error on the given network interface.
nettransmiterrors The number of packets that failed to transmit.
netsentspeed Network write speed in MByte/sec.
netspeed Network read and write speed in MByte/sec.
sqlcommands Shows the number of SQL commands executed measured in 1/s.
sqlcommits The number of commits measured in 1/s.
sqlconnections Shows the number of SQL connections.
sqlopentables The number of open tables at any given moment.
sqlqueries The number of SQL queries in 1/s.
sqlreplicationlag Replication lag on the SQL server.
sqlslowqueries The number of slow queries in 1/s.
swapfree The size of the free swap space measured in GBytes.

Node List

Using the --list and --long command-line options a detailed list of the nodes can be printed. Here is an example of such a list:

$ s9s node --list --long '192.168.1.1*'
STAT VERSION CID CLUSTER HOST PORT COMMENT
poM- 9.6.2 1 ft_postgresql_11794 192.168.1.117 8089 Up and running
coC- 1.4.2 1 ft_postgresql_11794 192.168.1.127 9555 Up and running
Total: 3
Note

The list in the example is created using a filter (that is ‘192.168.1.1*’ in the example). The last line shows a 3 as the total, the number of nodes maintained by the controller, but only two of the nodes are printed in the list because of the filter.

The list contains the following fields:

Field Description
STAT
  • Nodetype
    This is the type of node. It can be c for controller, g for Galera node, x for MaxScale node, k for Keepalived node, p for PostgreSQL, m for Mongo, e for MemCached, y for ProxySql, h for HaProxy, b for PgBouncer, B for PgBackRest, t for PBMAgent, a for Garbd, r for group replication host, A for cmon agent, P for Prometheus, s for generic MySQL nodes, S for Redis sentinel, R for Redis, E for Elasticsearch, and ? for unknown nodes.
  • Hoststatus
    The status of the node. It can be o for online, l for off-line, f for failed nodes, r for nodes performing recovery, - for nodes that are shut down and ? for nodes in an unknown state.
  • Role
    This field shows the role of the node in the cluster. This can be M for primary, S for replica, U for multi (primary and replica), C for controller, V for backup verification node, A for arbiter, R for backup repository host, D for Elasticsearch data host, c for Elasticsearch coordinator-only host, and - for everything else.
  • Maintenance
    This field shows if the node is in maintenance mode. The character is M for nodes in maintenance mode and - for nodes that are not in maintenance mode.
VERSION This field shows the version string of the software that provides the service represented in the given line.
CID The cluster ID of the cluster that holds the node as a member. Every node belongs to exactly one cluster.
CLUSTER The name of the cluster that holds the node as a member.
HOST The hostname of the host. This can be a real DNS hostname, the IP address, or the Cmon alias name of the node depending on the configuration and the command line options. The cluster is usually configured to use IP addresses (the Cmon configuration file contains IP addresses) so this field usually shows IP addresses.
PORT The IP port on which the node accepts requests. The same DNS hostname or IP address can be added multiple times to the same or to multiple clusters, but the host:port pair must be unique. In other words, the same host with the same port can not be added to the same Cmon controller twice. Since the hostname:port pair is unique the nodes are identified by this and every line of the node list is representing a hostname:port node. There is one exception to this rule: the Cmon Controller can manage multiple clusters and so be a part of more than one cluster with the same hostname and port.
COMMENT A short human-readable description that the Cmon Controller sets automatically to describe the host state. A single - character is shown if the controller did not set the message.

Node Format

The string that controls the format of the printed information about the nodes. When this command-line option is used the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields and flag characters as they are specified in the standard printf() C library functions. The % specifiers are ended by field name letters to refer to various properties of the nodes.

The %+12i format string for example has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the node are encoded by letters. In the %16D, for example, the letter D encodes the “data directory” field, so the full path of the data directory on the node will be substituted. Standard \ notation is also available; \n for example encodes a new-line character.

The s9s-tools support the following fields:

Field Description
A The IP address of the node.
a Maintenance mode flag. If the node is in maintenance mode a letter M, otherwise -.
b The master host for this slave if it is applicable.
C The configuration file for the most important process on the node (e.g. the configuration file of the MySQL daemon on a Galera node).
c The total number of CPU cores in the host. Please note that this number may be affected by hyper-threading. When a computer has 2 identical CPUs, with four cores each and uses 2x hyper-threading it will count as 2x4x2 = 16.
D The data directory of the node. This is usually the data directory of the SQL server.
d The PID file on the node.
G The name of the group owner of the node.
g The log file on the node.
h The CDT path of the node.
I The numerical ID of the node.
i The total number of monitored disk devices (partitions) in the cluster.
k The total number of disk bytes found on the monitored devices in the node. This is a double-precision floating-point number measured in Terabytes.
L The replay location. This field currently only has a valid value in PostgreSQL clusters.
l The received location. This field currently only has a valid value in PostgreSQL clusters.
M A message, describing the node’s status in a human-readable format.
m The total memory size found in the host, measured in GBytes. This value is represented by a double-precision floating-point number, so formatting it with precision (e.g. %6.2m) is possible. When used with the f modifier (e.g. %6.2fm) this reports the free memory, the memory that is available for allocation, used for cache, or used for buffers.
N The name of the node. If the node has an alias that is used, otherwise the name of the node is used. If the node is registered using the IP address, the IP address is the name.
n The total number of monitored network interfaces in the host.
O The user name of the owner of the cluster that holds the node.
o The name and version of the operating system together with the codename.
P The port on which the most important service is waiting for requests.
p The PID (process ID) on the node that presents the service (e.g. the PID of the MySQL daemon on a Galera node).
R The role of the node (e.g. “controller”, “master”, “slave” or “none”).
r The word read-only or read-write, indicating whether the server is in read-only mode or not.
S The status of the host (e.g. CmonHostUnknown, CmonHostOnline, CmonHostOffLine, CmonHostFailed, CmonHostRecovery, CmonHostShutDown).
s The list of slaves of the given host in one string.
T The type of the node, e.g. “controller”, “galera”, “postgres”.
t The total network traffic (both received and transmitted) measured in MBytes/seconds.
U The number of physical CPUs on the host.
u The CPU usage percent found on the host.
V The version string of the most important software (e.g. the version of the PostgreSQL installed on a PostgreSQL node).
v The ID of the container/VM in “CLOUD/ID” format, or the - string if no container ID is set for the node.
w The total swap space found in the host measured in GigaBytes. With the f modifier (e.g. %6.2fw) this reports the free swap space in GigaBytes.
Z The name of the CPU model. Should the host have multiple CPUs, this will return the model name of the first CPU.
% The % character itself.

Examples

List all nodes:

$ s9s node --list --long
ST  VERSION                  CID CLUSTER        HOST       PORT COMMENT
go- 10.1.22-MariaDB-1~xenial   1 MariaDB Galera 10.0.0.185 3306 Up and running
co- 1.4.1.1856                 1 MariaDB Galera 10.0.0.205 9500 Up and running
go- 10.1.22-MariaDB-1~xenial   1 MariaDB Galera 10.0.0.209 3306 Up and running
go- 10.1.22-MariaDB-1~xenial   1 MariaDB Galera 10.0.0.82  3306 Up and running
Total: 4

Print the configuration for a node:

$ s9s node --list-config --nodes=10.0.0.3
...
mysqldump   max_allowed_packet                     512M
mysqldump   user                                   backupuser
mysqldump   password                               nWC6NSm7PnnF8zQ9
xtrabackup  user                                   backupuser
xtrabackup  password                               nWC6NSm7PnnF8zQ9
MYSQLD_SAFE pid-file                               /var/lib/mysql/mysql.pid
MYSQLD_SAFE basedir                                /usr/
Total: 71

The following example shows how a node in a given cluster can be restarted. When this command is executed, a new job will be created to restart the node. The command-line tool will remain attached and show the job messages until the job is finished:

$ s9s node \
        --restart \
        --cluster-id=1 \
        --nodes=192.168.1.117 \
        --log

Change a configuration value for a PostgreSQL server:

$ s9s node \
        --change-config \
        --nodes=192.168.1.115 \
        --opt-name=log_line_prefix \
        --opt-value='%m '

Change a configuration option inside my.cnf (max_connections=500) on node 10.0.0.3:

$ s9s node \
        --change-config \
        --nodes=10.0.0.3 \
        --opt-group=mysqld \
        --opt-name=max_connections \
        --opt-value=500

Import two existing Keepalived nodes (192.168.20.56 and 192.168.20.57) where the virtual IP is 192.168.20.59 into cluster ID 13:

# import primary keepalived
$ s9s node \
        --register \
        --nodes="keepalived://192.168.20.56" \
        --virtual-ip=192.168.20.59 \
        --cluster-id=13 \
        --wait

# import secondary keepalived
$ s9s node \
        --register \
        --nodes="keepalived://192.168.20.57" \
        --virtual-ip=192.168.20.59 \
        --cluster-id=13 \
        --wait

List the Galera hosts. This can be done by filtering the list of nodes by their properties:

$ s9s node \
        --list \
        --long \
        --properties="class_name=CmonGaleraHost"

Create a set of graphs, one for each node shown in the terminal about the load on the hosts. If the terminal is wide enough the graphs will be shown side by side for a compact view:

$ s9s node \
        --stat \
        --cluster-id=1 \
        --begin="08:00" \
        --end="14:00" \
        --graph=load

The density function can also be printed to show what the typical values were for the given statistical data. The following example shows what the typical values were for the user-mode CPU usage percent:

$ s9s node \
        --stat \
        --cluster-id=2 \
        --begin=00:00 \
        --end=16:00 \
        --density \
        --graph=cpuuser

The following example shows how a custom list can be created to show some information about the CPU(s) in some specific hosts:

$ s9s node \
        --list \
        --node-format="%N %U CPU %c Cores %6.2u%% %Z\n" 192.168.1.191 192.168.1.195
192.168.1.191 2 CPU 16 Cores 22.54% Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
192.168.1.195 2 CPU 16 Cores 23.12% Intel(R) Xeon(R) CPU L5520 @ 2.27GHz

The following list shows some information about the memory: the total memory and the memory available for applications to allocate (including the cache and buffers along with the free memory):

$ s9s node \
        --list \
        --node-format="%4.2m GBytes %4.2fm GBytes %N\n"
16.00 GBytes 15.53 GBytes 192.168.1.191
47.16 GBytes 38.83 GBytes 192.168.1.127

Set a node to read-write mode if it was previously set to read-only mode:

$ s9s node \
        --set-read-write \
        --cluster-id=1 \
        --nodes=192.168.0.78 \
        --log

Copy configuration file(s) from a PostgreSQL server 192.168.0.232 into the localhost:

$ s9s node \
        --pull-config \
        --nodes="192.168.0.232" \
        --output-dir="tmp"

s9s-process

View processes running on nodes.

Usage

s9s process {command} {options}

Commands

Name, shorthand Description
--list, -L Lists the processes.
--list-digests Prints statement digests together with statistical data showing how long it took them to be executed. The printed list will not contain individual SQL statements but patterns that collect multiple statements of similar form, merged into groups by their similarities.
--list-queries Lists the queries, the internal SQL processes of the cluster.
--top-queries Continuously shows the internal SQL processes in an interactive UI, similar to the ClusterControl GUI Top Queries page.
--top Continuously shows the processes in an interactive UI like the well-known top utility. Please note that if the terminal program supports it, the UI can be controlled with the mouse.

Options

Name, shorthand Description
--cluster-id=ID The ID of the cluster to show.
--client=PATTERN Shows only the processes that originate from clients that match the given pattern.
--limit=N Limits the number of processes shown in the list.
--server=PATTERN Shows only the processes that are executed by servers that match the given pattern.
--sort-by-memory Sorts the processes by resident memory size instead of CPU usage.
--sort-by-time Sorts the SQL queries by their runtime. The longer running queries are going to be on top.
--update-freq=INTEGER Update frequency for screen refresh in seconds.

Examples

Continuously print aggregated view of processes (similar to top output) of all nodes for cluster ID 1:

$ s9s process --top --cluster-id=1

List aggregated view of processes (similar to ps output) of all nodes for cluster ID 1:

$ s9s process --list --cluster-id=1

Print out aggregated digested SQL statements on all MySQL nodes in cluster ID 1:

$ s9s process \
        --list-digests \
        --cluster-id=1 \
        --human-readable \
        --limit=10 \
        '*:3306'

Print an aggregated list of the top database queries containing the string "INSERT" on all nodes in cluster ID 1, with a refresh rate of 1 second:

$ s9s process \
        --top-queries \
        --cluster-id=1 \
        --update-freq=1 \
        'INSERT*'

Print all database queries on cluster ID 1 that are coming from a client with IP address 192.168.0.127 and contain the "INSERT" string:

$ s9s process \
        --list-queries \
        --cluster-id=1 \
        --client='192.168.0.127:*' \
        'INSERT*'

Print all database queries on cluster ID 1 that are reaching the database server 192.168.0.81:

$ s9s process \
        --list-queries \
        --cluster-id=1 \
        --server='192.168.0.81:*'

s9s-replication

Manage database replication related functions.

Note

Only applicable for supported database clusters namely MySQL/MariaDB Replication and PostgreSQL Streaming Replication.

Usage

s9s replication {command} {options}

Commands

Name, shorthand Description
--failover Takes over the role of master from a failed master.
--list Lists the replication links.
--promote Promotes a slave to become a master.
--stage Rebuilds or stages a replication slave.
--start Starts a replication slave previously stopped using the --stop option.
--stop Makes the slave stop replicating. This option will create a job that does not stop the server itself but stops the replication on it.

Options

Name, shorthand Description
--fail-stop-slave When this option is specified, the slave node will be stopped.
--link-format=FORMATSTRING The format string controls the format of the printed information about the replication links. See Link Format.
--master=NODE The replication master.
--remote-cluster-id=ID Remote cluster ID for the cluster-to-cluster (c2c) replication.
--replication-master=NODE This is the same as the --master option.
--slave=NODE The replication slave.
--replication-slave=NODE This is the same as the --slave option.

Link Format

The format string controls the format of the printed information about the links. When this command-line option is used, the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields, and flag characters can be used as they are specified in the standard printf() C library function. The % specifiers are ended by field name letters that refer to various properties of the replication link.

The %+12p format string, for example, has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the links are encoded by letters. In %4p, for example, the letter p encodes the slave port field, so the port number of the slave node will be substituted. The standard \ notation is also available; \n, for example, encodes a new-line character.

The following example prints a customized format showing the replication links among nodes, together with the master/slave positions in the replication log, for PostgreSQL streaming replication, MySQL GTID, and MariaDB GTID:

$ s9s replication \
        --list \
        --long \
        --link-format="%16h %4p <- %H %2P %o %O\n"
192.168.0.81 5432 <- 192.168.0.83 5432 0/71001BA8
192.168.0.82 5432 <- 192.168.0.83 5432 0/71001BA8
192.168.0.42 3306 <- 192.168.0.41 3306 dca7e205-90db-11ea-9143-5254008afee6:1-10 3157
192.168.0.43 3306 <- 192.168.0.41 3306 dca7e205-90db-11ea-9143-5254008afee6:1-10 3157
192.168.0.92 3306 <- 192.168.0.91 3306 0-54001-49 10285
192.168.0.93 3306 <- 192.168.0.91 3306 0-54001-49 10285

The s9s-tools support the following fields:

Field Description
c The cluster ID of the cluster where the slave node can be found.
C The master cluster-ID property of the slave host. This shows in which cluster the master of the represented replication link can be found.
d This format specifier denotes the “seconds behind the master” property of the slave.
h The hostname of the slave node in the link.
H The hostname of the master node in the link.
o The position of the slave in the replication log.
O The position of the master in the replication log.
p The port number of the slave.
P The port number of the master node.
s A short string representing the link status, e.g. “Online” when everything is ok.
m A slightly longer, human-readable string representing the state of the link. This is actually the slave_io_state property of the slave node.
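
For instance, to print only the hostnames of each slave/master pair together with the link status, a format string built from the fields above could look like the following (the column widths are arbitrary):

$ s9s replication \
        --list \
        --long \
        --link-format="%16h <- %16H %s\n"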

Examples

Print all database replication links for clusters that fall under the replication category:

$ s9s replication --list

Promote a slave 192.168.0.164 to become a new master for a database cluster named “MySQL Rep 5.7 – Production”:

$ s9s replication \
        --promote \
        --cluster-name="MySQL Rep 5.7 - Production" \
        --slave="192.168.0.164:3306" \
        --wait

Rebuild the replication slave of 192.168.0.83 from the master, 192.168.0.76, and tag the job as “stage” for cluster ID 1:

$ s9s replication \
        --stage \
        --cluster-id=1 \
        --job-tags="stage" \
        --slave="192.168.0.83:3306" \
        --master="192.168.0.76:3306" \
        --wait

Stop the replication slave on node 192.168.0.80:

$ s9s replication \
        --stop \
        --cluster-id=1 \
        --slave="192.168.0.80:3306" \
        --wait

Start the replication slave on node 192.168.0.80:

$ s9s replication \
        --start \
        --cluster-id=1 \
        --slave="192.168.0.80:3306" \
        --wait
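
Perform a failover when the master has failed, taking over the master role on a designated slave. This is a hedged sketch that assumes the --failover job accepts the same --cluster-id, --slave, and --wait options used in the examples above:

$ s9s replication \
        --failover \
        --cluster-id=1 \
        --slave="192.168.0.80:3306" \
        --wait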

s9s-report

Manage operational reports.

Usage

s9s report {command} {options}

Commands

Name, shorthand Description
--cat Prints the report text to the standard output.
--create Creates and prints a new report. Please provide the --type command-line option with the report type (chosen from the template list printed with the --list-templates option) and the --cluster-id option with the cluster ID.
--delete Deletes an existing report. The report ID should be used to specify which report to delete.
--list Lists the reports that have already been created.
--list-templates Lists the report templates showing what kind of reports can be created. This command-line option is usually used together with the --long option.

Options

Name, shorthand Description
--cluster-id=ID This command-line option passes the numerical ID of the cluster.
--report-id=ID This command-line option passes the numerical ID of the report to manage. When a report is deleted (--delete option) or printed (--cat option) this option is mandatory.
--type=NAME This command-line option controls the type of report. Use the --list-templates to list what types are available.

Examples

List the already created reports for cluster ID 1:

$ s9s report \
        --list \
        --long \
        --cluster-id=1

Print the report ID 1 for cluster ID 1:

$ s9s report \
        --cat \
        --report-id=1 \
        --cluster-id=1

Create a database growth report for cluster ID 1:

$ s9s report \
        --create \
        --type=dbgrowth \
        --cluster-id=1

Delete an operational report with report ID 11:

$ s9s report \
        --delete \
        --report-id=11

Print the supported templates that can be used as the --type value when creating a new report:

$ s9s report \
        --list-templates \
        --long

s9s-script

Manage and execute Advisor scripts.

Usage

s9s script {command} {options}

Commands

Name, shorthand Description
--execute Executes a script from a local file.
--system Executes a shell command or an entire shell script on the nodes of the cluster, showing the output of the command as job messages. See Shell Commands.
--tree Prints the names of the scripts stored on the controller in tree format.

Options

Name, shorthand Description
--cluster-id=ID The target cluster ID.
--cluster-name=NAME, -n The target cluster name.
--shell-command=COMMAND The shell command to execute together with the --system option. See Shell Commands.
--timeout=SECONDS The timeout value in seconds when executing the command/script; the default is 10 seconds. If the command/script is still running when the timeout expires, the job fails and the execution of the command(s) is aborted.

Shell Commands

If the --shell-command option is provided, its argument will be executed. If a file name is provided as a command-line argument, the content of the file will be executed as a shell script. If neither is found on the command line, s9s will read its standard input and execute the lines found there as a shell script on the nodes (this is the way to implement self-contained, remotely executed shell scripts using a shebang).

Please note that this will be a job and it can be blocked by other jobs running on the same cluster.

$ s9s script \
        --system \
        --log \
        --cluster-id=1 \
        --shell-command="df -h"

The command/script will be executed using the pre-configured OS user of the cluster. If that user has no superuser privileges, the sudo utility can be used in the command or the script to gain superuser privileges.

By default, the command/script will be executed on all nodes, i.e. all members of the cluster except the controller. This can be changed by providing a node list using the --nodes command-line option.

To execute shell commands, the authenticated user has to have execute privileges on the nodes. If the execute privilege is not granted, the credentials of another Cmon user can be passed on the command line, or the privileges can be changed (see s9s-tree for details about owners, privileges, and ACLs).

$ s9s script \
        --system \
        --cmon-user=system \
        --password=mysecret \
        --log \
        --cluster-id=1 \
        --timeout=2 \
        --nodes='192.168.0.127;192.168.0.44' \
        --shell-command="df -h"

Please note that the job has a 10-second timeout by default, so if the command/script keeps running, the job will fail and the execution of the command(s) will be aborted. The timeout value can be set using the --timeout command-line option.

Examples

Print the scripts available for cluster ID 1:

$ s9s script --tree --cluster-id=1

Execute a script called test_script.sh on all nodes in the cluster with ID 1:

$ s9s script \
        --system \
        --log \
        --log-format="%M\n" \
        --timeout=20 \
        --cluster-id=1 \
        test_script.sh

Get the disk space summary of hosts 192.168.0.127 and 192.168.0.44 in cluster ID 1:

$ s9s script \
        --system \
        --log \
        --cluster-id=1 \
        --timeout=2 \
        --nodes='192.168.0.127;192.168.0.44' \
        --shell-command="df -h"
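
As described under Shell Commands, if neither --shell-command nor a file name argument is given, the script is read from standard input. A local script can therefore also be piped in; disk_check.sh below is a hypothetical local file:

$ cat disk_check.sh | s9s script \
        --system \
        --log \
        --cluster-id=1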

s9s-server

Provisions and manages virtualization hosts to be used for hosting database containers.

Note

The supported virtualization platforms are LXC/LXD and Amazon Web Services (AWS) EC2 instances. Docker is not supported at the moment.

LXC Containers

Handling LXC containers is a new feature added to the CMON Controller and the s9s command-line tool. The basic functionality is available and tested: containers can be created, started, stopped, and deleted, and containers can even be created on the fly while installing clusters or cluster nodes.

For LXC containers, one needs a container server: a computer that has the LXC software installed and configured, and of course a proper account so that the CMON Controller can access the container server. One can set up an LXC container server in two easy steps and one not-so-easy step:

  1. Install a Linux server and set it up so that the root user can SSH in from the Cmon Controller with a key, without a password. Creating such access for the superuser is of course not the only way; it is just the easiest.
  2. Register the server as a container server on the CMON Controller by issuing the s9s server --register --servers="lxc://IP_ADDRESS" command. This will install the necessary software and register the server as a container server to be used later.
  3. The hard part is the network configuration on the container server. Most of the distributions by default have a network configuration that provides a local (host only) IP address for the newly created containers. In order to provide a public IP address for the containers, the container server must have some sort of bridging or NAT configured.

A possible way to configure the network for public IP is described in this blog, Converting eth0 to br0 and getting all your LXC or LXD onto your LAN.

CMON-Cloud Virtualization

The cmon-cloud containers are an experimental virtualization backend, a brand new feature added to the CMON Controller.

Usage

s9s server {command} {options}

Commands

Name, shorthand Description
--add-acl Adds a new ACL entry to the server or modifies an existing ACL entry.
--create Creates a new server. If this option is provided, the controller will use SSH to discover the server, install the necessary software packages, and modify the configuration if needed so that the server can host containers.
--get-acl Lists the ACL of a server.
--list Lists the registered servers.
--list-disks Lists disks found in one or more servers.
--list-images Lists the images available on one or more servers. With the --long command-line option a more detailed list is available.
--list-memory Lists memory modules from one or more servers.
--list-nics Lists network controllers from one or more servers.
--list-partitions Lists partitions from multiple servers.
--list-processors Lists processors from one or more servers.
--list-regions Prints the list of regions the server(s) support together with some important information (e.g. whether the controller has credentials to use those regions or not).
--list-subnets Lists all the subnets that exist on one or more servers.
--list-templates Lists the supported templates. Various virtualization technologies handle templates differently; some even use different terminology (for example, “size” is one such synonym). In the case of LXC, any stopped container can be used as a template for new containers to be created.
--register Registers an existing container server. If this command-line option is provided, the controller will register the server to be used as a container server later. No software packages are installed and no configurations are changed.
--start Boots up a server. This option will try to start up a server that is physically turned off (using e.g. the wake-on-LAN feature).
--stat Prints details about one or more servers.
--stop Shuts down and powers off a server. When this command-line option is provided, the controller will run the shutdown program on the server.
--unregister Unregisters a container server, simply removing it from the controller.

Options

Name, shorthand Description
--log If the s9s application created a job and this command-line option is provided, it will wait until the job is executed. While waiting, the job logs will be shown unless the silent mode is set.
--recurrence=CRONTABSTRING This option can be used to create recurring jobs, jobs that are repeated over and over again until they are manually deleted. Every time the job is repeated, a new job will be instantiated by copying the original recurring job and starting the copy. The option’s argument is a crontab-style string defining the recurrence of the job. See Crontab.
--schedule=DATETIME The job will not be executed now but is scheduled to execute later. The DATETIME string is sent to the backend, so all the formats supported by the controller can be used.
--timeout=SECONDS Sets the timeout for the created job. If the execution of the job is not done before the timeout (counted from the start time of the job) expires, the job will fail. Some jobs might not support the timeout feature; the controller might ignore this value.
--wait If the application created a job (e.g. to create a new cluster) and this command-line option is provided, the s9s program will wait until the job is executed. While waiting, a progress bar will be shown unless the silent mode is set.
--acl=ACLSTRING The ACL entry to set.
--os-key-file=PATH The SSH key file to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user), the controller will try to log in with the default settings.
--os-password=PASSWORD The SSH password to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user), the controller will try to log in with the default settings.
--os-user=USERNAME The SSH username to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user), the controller will try to log in with the default settings.
--refresh Do not use cached data; collect new information.
--servers=LIST List of servers.

Server List

Using the --list and --long command-line options, a detailed list of the servers can be printed. Here is an example of such a list:

$ s9s server --list --long
PRV VERSION #C OWNER GROUP     NAME        IP           COMMENT
lxc 2.0.8    5 pipas testgroup core1       192.168.0.4  Up and running.
lxc 2.0.8    5 pipas testgroup storage01   192.168.0.17 Up and running.
Total: 2 server(s)

The list contains the following fields:

Field Description
PRV The name of the provider software. The software that will handle containers or virtual machines on the server. One server can have only one such system, but multiple servers can be registered using one physical computer.
VERSION The version of the provider software.
#C The number of containers/virtual machines currently hosted by the server.
OWNER The owner of the server object.
GROUP The group owner of the server object.
NAME The hostname of the server.
IP The IP address of the server.
COMMENT A human-readable description of the server and its state.

Examples

  • Register a virtualization host:

    $ s9s server --register --servers=lxc://storage01
    
  • Check the list of virtualization hosts:

    $ s9s server --list --long
    
  • Create a virtualization server with an operating system username and password to be used to host containers. The controller will try to access the server using the specified credentials:

    $ s9s server \
            --create \
            --os-user=testuser \
            --os-password=passw0rd \
            --servers=lxc://192.168.0.250 \
            --log
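    
  • Unregister a container server that is no longer needed. This is a simple sketch using the --unregister and --servers options documented above:

    $ s9s server \
            --unregister \
            --servers=lxc://storage01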
    

s9s-tree

Create, manage, and manipulate the CMON Directory Tree (CDT).

Usage

s9s tree {command} {options}

Commands

Name, shorthand Description
--access Checks the access rights of the authenticated user for the given CDT entry. This main option is made to be used in shell scripts; the exit code of the s9s program shows whether the user has the privileges specified in the command line.
--add-acl Adds an access control list (ACL) entry to an object in the tree. Overwrites the ACL if the given type of ACL already exists.
--add-tag This main option can be used to add a new tag to the tag list of an existing object.
--cat This option can be used to print the content of a CDT file entry, in a similar way to how the standard cat utility is used to print normal files.
--chown Changes the ownership of an object. The new owner (and optionally the group owner) should be passed through the --owner command-line option.
--delete Removes CDT entries.
--get-acl Prints the ACL of a CDT entry.
--list Prints the Cmon Directory Tree in list format.
--mkdir Creates a new directory in the tree.
--move Moves an object to a new location in the tree or renames the entry. If the target contains the / character, it is assumed to be a directory and the source entry will be moved to that directory with its name unchanged. If the target contains no / character, it is assumed to be a new name and the source entry will be renamed while kept in the same directory.
--remove-acl Removes an ACL entry from the ACL of an object.
--remove-tag Removes a tag from the tag list of an existing object.
--rmdir Removes an empty directory from the tree.
--save Saves data (text) into an existing CDT entry that has the proper type (i.e. the entry is a file).
--touch Creates a CDT entry that is a file.
--tree Prints the tree in its original tree format.
--watch Opens an interactive UI to watch and manipulate the CDT filesystem and its entries.

Options

Name, shorthand Description
--acl=ACLSTRING An ACL entry in string format. See ACL Text Forms.
--all The CDT entries that have a name starting with a dot (.) are considered to be hidden entries. These are only printed if the --all command-line option is provided.
--owner=USER[:GROUP] The user name and group name of the owner.
--recursive Prints also the sub-items of the tree. With --chown, the ownership will be changed for the sub-items too. Please note that --tree is always recursive, so there is no need for this command-line option there.
--refresh Recollect the data.
--tag=TAGSTRING Specifies one single tag when adding or removing tags of a tag list that belongs to an object.

ACL Text Forms

A long and a short text form for representing ACLs are defined. In both forms, ACL entries are represented as three colon-separated fields: an ACL entry tag type, an ACL entry qualifier, and discretionary access permissions. The first field contains one of the following entry tag type keywords:

Tag type Description
user A user ACL entry specifies the access granted to either the file owner (entry tag type ACL_USER_OBJ) or a specified user (entry tag type ACL_USER).
group A group ACL entry specifies the access granted to either the file group (entry tag type ACL_GROUP_OBJ) or a specified group (entry tag type ACL_GROUP).
mask A mask ACL entry specifies the maximum access which can be granted by any ACL entry except the user entry for the file owner and the other entry (entry tag type ACL_MASK).
other An other ACL entry specifies the access granted to any process that does not match any user or group ACL entries (entry tag type ACL_OTHER).

The second field contains the user or group identifier of the user or group associated with the ACL entry for entries of entry tag type ACL_USER or ACL_GROUP, and is empty for all other entries. A user identifier can be a user name or a user ID number in decimal form. A group identifier can be a group name or a group ID number in decimal form.

The third field contains discretionary access permissions. The read, write, and search/execute permissions are represented by the r, w, and x characters, in this order. Each of these characters is replaced by the - character to denote that the permission is absent in the ACL entry. When converting from the text form to the internal representation, permissions that are absent need not be specified.

White space is permitted at the beginning and end of each ACL entry, and immediately before and after a field separator (the colon character).

LONG TEXT FORM

The long text form contains one ACL entry per line. In addition, a number sign (#) may start a comment that extends until the end of the line. If an ACL_USER, ACL_GROUP_OBJ or ACL_GROUP ACL entry contains permissions that are not also contained in the ACL_MASK entry, the entry is followed by a number sign, the string “effective:”, and the effective access permissions defined by that entry. This is an example of the long text form:

user::rw-
user:lisa:rw-         #effective:r--
group::r--
group:toolies:rw-     #effective:r--
mask::r--
other::r--

SHORT TEXT FORM

The short text form is a sequence of ACL entries separated by commas and is used for input. Comments are not supported. Entry tag type keywords may either appear in their full unabbreviated form, or in their single letter abbreviated form. The abbreviation for a user is u, the abbreviation for a group is g, the abbreviation for the mask is m, and the abbreviation for others is o.

The permissions may contain at most one each of the following characters in any order: r, w, x. These are examples of the short text form:

u::rw-,u:lisa:rw-,g::r--,g:toolies:rw-,m::r--,o::r--
g:toolies:rw,u:lisa:rw,u::wr,g::r,o::r,m::r

For more information, check out the POSIX Access Control Lists documentation by running the man acl command.

Examples

List out all CDT objects in a tree structure view:

$ s9s tree \
        --tree --all

Add an access control list entry to allow a group called “users” to have read (access the cluster) and write (modify the cluster) permissions on a cluster named “galera_001”:

$ s9s tree \
        --add-acl \
        --acl="group:users:rw-" \
        galera_001

Add the tag “Production” to a cluster located at /PostgreSQL Cluster 001 (as a CDT object):

$ s9s tree \
        --add-tag \
        --tag="Production" \
        "/PostgreSQL Cluster 001"

Change the ownership of a cluster called “MariaDB 2 QA” (located at /MariaDB 2 QA as a CDT object) to user john and group DBA:

$ s9s tree \
        --chown \
        --owner=john:DBA \
        --recursive \
        "/MariaDB 2 QA"

Create a new directory inside the CDT tree:

$ s9s tree \
        --mkdir \
        /home/kirk

List out all objects under /MariaDB_Cluster_1 including the hidden objects:

$ s9s tree \
        --list \
        --long \
        --recursive \
        --all \
        --full-path \
        /MariaDB_Cluster_1
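
Print the ACL of a CDT entry, for instance the cluster object used in the ACL example above. This is a brief sketch using the --get-acl option documented earlier:

$ s9s tree \
        --get-acl \
        galera_001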

s9s-user

Manage users.

Usage

s9s user {command} {options}

Commands

Name, shorthand Description
--add-key Registers a new public key for an existing user. After the command, the user will be able to authenticate with the private part of the registered public key; no password will be necessary.
--add-to-group Adds the user to a group.
--change-password Modifies the password of an existing user. The password is not a simple property, so it cannot be changed using the --set option; this special command-line option has to be used.
--create Registers a new user (creates a new user account) on the controller and grants access to the ClusterControl system. The user name of the new account should be the command-line argument.
--delete Deletes an existing user.
--disable Disables the user (turns on the “disabled” flag of the user). Users that are disabled are not able to log in.
--enable Enables the user. This will clear the “disabled” flag of the user so that the user will be able to log in again. The “suspended” flag will also be cleared, the failed login counter set to 0, and the date and time of the last failed login deleted, so users who are suspended for failed login attempts will also be able to log in.
--list Lists the users registered on the ClusterControl controller.
--list-groups Lists the user groups maintained by the ClusterControl controller.
--list-keys Lists the public keys registered in the controller for the specified user. Please note that viewing the public keys requires special privileges; ordinary users cannot view the public keys of other users.
--preferences Adds, sets, or deletes the preferences for a given --cmon-user, for example the preferences for the UI. It is used with other input options such as --preferences-to-set or --preferences-to-delete.
--get-preferences Gets the preferences for a given --cmon-user.
--password-reset Resets the password for the user using the “forgot password” email schema. This option must be used twice to change the password: once without a token to send an email about the password reset, and once with the token received in the email.
--set Changes the specified properties of the user.
--remove-from-group Removes the user from a group.
--set-group Sets the primary group for the specified user. The primary group is the first group the user belongs to. This option will remove the user from its current primary group and add it to the group specified by the --group command-line option.
--stat Prints detailed information about the specified user(s).
--whoami Same as --list, but only lists the current user, the user that authenticated on the controller.

Options

Name, shorthand Description
--cmon-user, -u Username on the CMON system.
--group=GROUPNAME Sets the name of the group. For example, when a new user is created, this option can be used to control what the primary group of the new user will be. It is also possible to filter the users by group name while listing them.
--create-group If this command-line option is provided and the group for the new user does not exist, the group will be created together with the new user.
--first-name=NAME Sets the first name of the user.
--last-name=NAME Sets the last name of the user.
--public-key-file=FILENAME The name of the file where the public key is stored. Please note that this currently only works with the --add-key option.
--preferences-to-set=LIST List of preferences of a given --cmon-user to insert/update in the cmon database. It is used with the --preferences main option. The LIST consists of key=value pairs separated by a semicolon (;).
--preferences-to-delete=LIST List of preferences of a given --cmon-user to delete from the cmon database. It is used with the --preferences main option. The LIST consists of keys separated by a semicolon (;).
--title=TITLE The title prefix (e.g. Dr.) for the user.
--email-address=ADDRESS The email address for the user.
--new-password=PASSWORD The new password when changing the password.
--old-password=PASSWORD The old password when changing the password.
--user-format[=FORMATSTRING] The string controls the format of the printed information about the users. See User Formatting.
--without-tags=LIST When listing the users, this option can be used to limit the list to those users that have none of the specified tags set.
--with-tags=LIST When listing the users, this option can be used to limit the list to those users that have at least one of the specified tags set.

Creating the First User

To use the s9s command-line tool, a Cmon user account is needed to authenticate on the Cmon Controller. These user accounts can be created using the s9s program itself, either by authenticating with a pre-existing user account or by bootstrapping the user management to create the very first user. The following section describes the authentication and the creation of the first user in detail.

If there is a username specified, either on the command line using the --cmon-user (or -u) option or in the configuration file (either ~/.s9s/s9s.conf or /etc/s9s.conf, using the cmon_user variable name), the program will try to authenticate with this username. Creating the very first user account is of course not possible this way. The --cmon-user option and the cmon_user variable are not for specifying which user we want to create; they set which user we want to use for the connection.

If no username is set for authentication and user creation is requested, s9s will try to send a request to the controller through a named pipe. This is the only way a user account can be created without authenticating with an existing user account, and therefore the only way the very first user can be created. Here is an example:

$ s9s user --create \
       --group=admins \
       --generate-key \
       --controller=https://192.168.1.127:9501 \
       --new-password="MyS3cr3tpass" \
       --email-address="[email protected]" \
       admin

Consider the following:

  1. There is no --cmon-user specified; this is the first user we create and we do not have any pre-existing user account. This command line is for creating the very first user. Please check out the Examples section to see how to create additional users.
  2. This is the first run, so we assume no ~/.s9s/s9s.conf configuration file exists and there is no username there either.
  3. In the example, we create the user with the username “admin”; it is the command-line argument of the program.
  4. The command specifies the controller to be at https://192.168.1.127:9501. The HTTPS protocol will be used later, but to create this first user s9s will try to use SSH and sudo to access the controller’s named pipe on the specified host. For this to work, the UNIX user running this command has to have passwordless SSH and sudo set up to the remote host. If the specified host is the local host, the user does not need SSH access, but still needs to be root or have passwordless sudo access because, of course, the named pipe is not accessible to everyone.
  5. Since the UNIX user has no s9s configuration file, one will be created. The controller URL and the user name will be stored in it under ~/.s9s/s9s.conf. The next time this user runs the program, it will use this “admin” user unless another username is set on the command line, and it will try to connect to this controller unless another controller is set on the command line.
  6. The password will be set for the user on the controller, but the password will never be stored in the configuration file.
  7. The --generate-key option is provided, so a new private/public key pair will be generated and stored in the ~/.s9s/ directory, and the public key will be registered on the controller for the new user. The next time the program runs, it will find the username in the configuration file and the private key in place for the user, and will automatically authenticate without a password (see the example after this list). The command-line options always have precedence, so this automatic authentication is simply the default; password authentication is always available.
  8. The group for the new user is set to “admins”, so this user will have special privileges. It is always a good idea to create the very first user with special privileges; other users can then be created by this administrator account.
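
Once the configuration file and the generated key are in place, a subsequent call should authenticate without asking for a password. A minimal check of this, using the --whoami option described above:

$ s9s user --whoami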

User List

Using the --list and --long command-line options, a detailed list of the users can be printed. Here is an example of such a list:

$ s9s user --list --long worf jadzia
A ID UNAME  GNAME EMAIL           REALNAME
- 11 jadzia ds9   [email protected]     Lt. Jadzia Dax
A 12 worf   ds9   [email protected] Lt. Worf
Total: 12

Please note that there are a total of 12 users defined on the system, but only two of those are printed because we filtered the list with the command line arguments.

The list contains the following fields:

Field Description
A Shows the authentication status. If this field shows the letter ‘A’ the user is authenticated with the current connection.
ID Shows the user ID, a unique numerical ID identifying the user.
UNAME The username.
GNAME The name of the primary group of the user. Every user belongs to at least one group, the primary group.
EMAIL The email address of the user.
REALNAME The real name of the user consists of first name, last name, and some other parts, printed here as a single string composed of all the available components.

User Formatting

When the --user-format command-line option is used, the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields, and flag characters can be used as they are specified in the standard printf() C library function. The % specifiers are ended by field name letters that refer to various properties of the users.

The %+12I format string, for example, has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the user are encoded by letters. In %16N, for example, the letter N encodes the username field, so the username of the user will be substituted. The standard \ notation is also available; \n, for example, encodes a new-line character.

The s9s command supports the following fields:

Field Description
d The distinguished name of the user. This currently has meaning only for users originated from an LDAP server.
F The full name of the user.
f The first name of the user.
G The names of groups the given user belongs to.
I The unique numerical ID of the user.
j The job title of the user.
l The last name of the user.
M The email address of the user.
m The middle name of the user.
N The username for the user.
o The origin of the user, the place that used to store the original instance of the user. The possible values are “CmonDb” for users from the Cmon Database or “LDAP” for users from the LDAP server.
P The CDT path of the user.
t The title of the user (e.g. “Dr.”).
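
For example, to print the username and email address of each user, a format string built from the fields above could look like the following (the field width is arbitrary):

$ s9s user \
        --list \
        --user-format="%-16N %M\n"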

Examples

  • Create a remote s9s user and generate an SSH key for the user:

    $ s9s user --create \
            --generate-key \
            --cmon-user=dba \
            --controller="https://10.0.1.12:9501"
    
  • List out all users:

    $ s9s user --list --long
    A ID UNAME      GNAME  EMAIL REALNAME
    -  1 system     admins -     System User
    -  2 nobody     nobody -     -
    A  3 dba        users  -     -
    -  4 remote_dba users  -     -
    
  • List users that have at least one of the specified tags set:

    $ s9s user \
            --list \
            --long \
            --with-tags="ds9;tng"
    
  • Register a new public key for the active user:

    $ s9s user \
            --add-key \
            --public-key-file=/home/pipas/.ssh/id_rsa.pub \
            --public-key-name=The_SSH_key
    
  • Add user “myuser” into group “admins”:

    $ s9s user \
            --add-to-group \
            --group=admins \
            myuser
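    
  • Remove user “myuser” from group “admins”. This mirrors the previous example and assumes the --remove-from-group command follows the same option pattern as --add-to-group:

    $ s9s user \
            --remove-from-group \
            --group=admins \
            myuser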
    
  • Set a new password for user “pipas”:

    $ s9s user \
            --change-password \
            --new-password="MyS3cr3tpass" \
            pipas
    
  • Create a new user called “john” under group “dba”:

    $ s9s user \
            --create \
            --group=dba \
            --create-group \
            --generate-key \
            --new-password=s3cr3tP455 \
            --email-address=[email protected] \
            --first-name=John \
            --last-name=Doe \
            --batch \
            john
    
  • Delete an existing user called “mydba”:

    $ s9s user \
            --delete \
            mydba
    
  • Disable user nobody:

    $ s9s user \
            --cmon-user=system \
            --password=secret \
            --disable \
            nobody
    
  • Enable user nobody:

    $ s9s user \
            --cmon-user=system \
            --password=secret \
            --enable \
            nobody
    
  • Reset the password for a user called “dba”. One has to obtain the one-time token which will be sent to the registered email address of the corresponding user, followed by the actual password modification command with --token and --new-password parameters:

    $ s9s user \
            --password-reset \
            dba
    $ ## check the mail inbox of the respective user
    $ s9s user \
            --password-reset \
            --token="98197ee4b5584cedba88ef1f583a1258" \
            --new-password="newp455w0rd" \
            dba
    
  • Set a primary group for user “dba”:

    $ s9s user \
            --set-group \
            --group=admins \
            dba
    
  • Add, set, or delete the preferences for a given --cmon-user:

    $ s9s user --cmon-user=s9s --password=pass1299 --preferences --preferences-to-set="key1=value1;key2=value2;key3=value3"
    $ s9s user --cmon-user=s9s --password=pass1299 --preferences --preferences-to-set="key2=QWERTY"
    $ s9s user --cmon-user=s9s --password=pass1299 --preferences --preferences-to-delete="key1;key2"
    
  • Get the preferences for a given --cmon-user:

    $ s9s user --cmon-user=s9s --password=***** --get-preferences