Create, manage, and manipulate clusters.
Usage
s9s cluster {command} {options}
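For example, the connection to the controller can be verified with the --ping command described in the table below:

$ s9s cluster --ping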
Command
Name, shorthand | Description |
---|---|
--add-node | Adds a new node (server) to the cluster or, to be more precise, creates a new job that will eventually add a new node to the cluster. The name (or IP address) of the node should be specified using the --nodes command-line option. See Node List. |
--available-upgrades | Shows the available packages to upgrade the cluster with. |
--change-config | Changes the configuration values for the cluster. The cluster configuration in this context is the Cmon Controller’s configuration for the given cluster. |
--check-hosts | Checks the hosts before installing a cluster. |
--check-pkg-upgrades | Checks the available cluster package upgrades. |
--collect-logs | Creates a job that will collect the log files from the nodes of the cluster. |
--create | Creates a new cluster. When this command-line option is provided, the program will contact the controller and register a new job that will eventually create a new cluster. |
--create-account | Creates a new account to be used on the cluster to access the database(s). |
--create-database | Creates a database on the cluster. |
--create-report | When this command-line option is provided, a new job will be started that will create a report. After the job is executed, the report will be available on the controller. If the --output-dir command-line option is provided, the report will be created in the given directory on the controller host. See the example following this table. |
--delete-account | Deletes an existing account from the cluster. |
--delete-database | Creates a new job that will delete a database from the cluster. |
--deploy-agents | Starts a job to deploy agent-based monitoring using Prometheus and exporters to the nodes. |
--disable-recovery | Creates a new job that will disable auto-recovery for the cluster (both cluster auto-recovery and node auto-recovery). The job can optionally also register a maintenance period for the cluster. |
--disable-ssl | Disables SSL connections on the nodes. |
--drop | Drops the cluster from the controller. |
--enable-recovery | Creates a job that will enable auto-recovery for both the cluster and the nodes in the cluster. |
--enable-ssl | Enables SSL connections on the nodes. |
--import-config | Creates a job that will import all the configuration files from the nodes of the cluster. |
--import-sql-users | Imports SQL users to the load balancer. Depending on the actual load balancer, this can be an import only or a complete update of the user authentication information known by the load balancer. This is only supported by PgBouncer at the moment. The load balancer nodes where the users are to be imported shall be specified using the --nodes command-line option. See Node List. |
--list, -L | Lists the clusters. |
--list-config | Prints the configuration values for the cluster. The cluster configuration in this context is the Cmon Controller’s configuration for the given cluster. |
--list-databases | Lists the databases found on the cluster. Please note that if the cluster has a lot of databases, this option might not show some of them: sampling a huge number of databases would generate a high load, so the controller has a built-in upper limit. |
--ping | Checks the connection to the controller. |
--promote-slave | Promotes a slave node to become a master. This option only works on cluster types where it is meaningful, i.e. where slaves and masters exist. |
--reconfigure-node | Reconfigures the specified nodes of the cluster. |
--reinstall-node | Reinstalls the software on the nodes and also reconfigures it. |
--register | Registers an existing cluster in the controller. This option is very similar to the --create option, but it will not install a new cluster; it only registers one. |
--remove-node | Removes a node from the cluster (creates a new job that will remove the node from the cluster). The name (or IP address) of the node should be specified using the --nodes command-line option. See Node List. |
--rolling-restart | Restarts the nodes (one node at a time) without stopping the cluster. |
--set-read-only | Creates a job that, when executed, will set the entire cluster into read-only mode. Please note that not every cluster type supports read-only mode. |
--start | Creates a new job to start the cluster. |
--stat | Prints the details of one or more clusters. |
--stop | Creates and registers a new job that will stop the cluster when executed. |
--sync | Synchronizes the cluster list with the UI frontend. |
--upgrade-cluster | Upgrades cluster packages while keeping the same major version. |
--upgrade-to-version | The newer major version to upgrade to. Without this, only a minor upgrade will be done. |
--upgrade-method | Strategy for doing a major upgrade. For PostgreSQL, two methods are supported: copy and link. The default is the copy method, which copies all data by taking a backup of the old version and restoring it to the new version. With the link method, the data files are not copied; instead, hard links are created to the old version’s data files. |
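For instance, the --create-report command above can be combined with the --output-dir option documented under Options; a minimal sketch, with a hypothetical report directory on the controller host:

$ s9s cluster --create-report \
--cluster-id=1 \
--output-dir=/tmp/cluster_reports \
--wait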
Options
Name, shorthand | Description |
---|---|
--backup-id=NUMBER | The ID of a backup to be restored on the newly created cluster. |
--batch | Print no messages. If the application created a job, print only the job ID number and exit. If the command prints data, do not use syntax highlighting, headers, or totals, only the pure table, so it can be processed using filters. |
--cluster-format=FORMATSTRING | The string that controls the format of the printed information about clusters. See Cluster Format. |
--cluster-id=ID, -i | The ID of the cluster to manipulate. |
--cluster-name=NAME, -n | Sets the cluster name. If the operation creates a new cluster, this will be the name of the new cluster. |
--cluster-type=TYPENAME | The type of cluster to install. Currently, the following types are supported: galera, mysqlreplication, groupreplication (or group_replication), ndb (or ndbcluster), mongodb (MongoDB ReplicaSet only), and postgresql. |
--config-template=FILENAME | Use the specified file as a configuration template to create the configuration file for the new cluster. |
--create-local-repository | Create a local software (APT/YUM) repository mirror when installing software packages. Using this command-line option it is possible to deploy clusters and add nodes offline, without a working internet connection. |
--datadir=DIRECTORY | The directory on the node(s) that will hold the data. The primary use of this command-line option is to set the data directory path when a cluster is created. |
--db-admin=USERNAME | The user name of the database administrator (e.g. ‘root’). |
--db-admin-passwd=PASSWD | The password for the database administrator. |
--donor=ADDRESS | Currently, this option is used when starting a cluster. It can be used to control which node is started first and used as the donor for the others. |
--enterprise-token=TOKEN | The customer’s Repo/Download Token for an Enterprise Database. |
--job-tags=LIST | Tags for the job, if a job is created. |
--keep-firewall | When this option is not specified, the CLI passes the “disable firewall” option to cluster creation and node addition operations. Pass this option to keep your firewall settings. This option can also be set in the s9s configuration file using the keep_firewall keyword (check s9s.conf(5) for further details). |
--local-repository=NAME | Use a local repository mirror created by ClusterControl for software deployment. |
--long, -l | Print the detailed list. |
--no-header | Do not print headers for tables. |
--no-install | Skip the cluster software installation part and assume all software is already installed on the node(s). This command-line option is considered when installing a new cluster or adding a new node to an existing cluster. |
--nodes=NODE_LIST | List of nodes to work with. See Node List. |
--os-user=USERNAME | The name of the remote user used to gain SSH access on the remote nodes. If this command-line option is omitted, the name of the local user will be used on the remote hosts too. |
--os-key-file=PATH | The path of the SSH key to install on a new container to allow the user to log in. This command-line option can be passed when a new container is created; the argument should be the path of the private key stored on the controller. Although the path of the private key file is passed, only the public key will be uploaded to the new container. |
--output-dir=DIR | The directory where the files are created. Use in conjunction with the --create-report command. |
--percona-client-id=CLIENTID | The client ID for the Percona Pro repository. |
--percona-pro-token=TOKEN | The token for the Percona Pro repository. |
--provider-version=VERSION | The version string of the software to be installed. |
--remote-cluster-id=ID | The remote cluster ID, used at cluster creation when cluster-to-cluster replication is to be installed. Please note that not all cluster types support cluster-to-cluster replication. |
--semi-sync=[true\|false] | Specifies the semi-sync mode for MySQL Replication. |
--use-internal-repos | Use internal repositories when installing software packages. Using this command-line option it is possible to deploy clusters and add nodes offline, without a working internet connection. The internal repositories have to be set up in advance. |
--vendor=VENDOR | The name of the software vendor to be installed. |
--wait | Waits for the specified job to end. While waiting, a progress bar is shown unless silent mode is set. |
--with-ssl | Sets up SSL while creating a new cluster. |
--with-database | Create a new database for the account when creating a new database user account. |
--without-ssl | If this option is provided, SSL will not be set up while creating a new cluster. |
--with-tags=LIST | Limit the list of printed clusters by tags. See Cluster Tagging. |
--without-tags=LIST | Limit the list of printed clusters by tags. See Cluster Tagging. |
--with-timescaledb | Install the TimescaleDB extension when creating a new cluster. This is currently only supported on PostgreSQL systems. |
ACCOUNT, DATABASE & CONFIGURATION MANAGEMENT | |
--account=NAME[:PASSWD][@HOST] | The account to be created on the cluster. |
--db-name=NAME | The name of the database. |
--opt-group=NAME | The option group for configuration. |
--opt-name=NAME | The name of the configuration item. |
--opt-value=VALUE | The value for the configuration item. |
--with-database | Create a database for the user too. |
CONTAINER & CLOUD | |
--cloud=PROVIDER | The name of the cloud provider where the new container(s) will be created. This command-line option can also be used to filter the list of containers when used together with the --list or --stat options. |
--containers=LIST | A list of containers to be created and used by the created job. This command-line option can be used to create containers (virtual machines) and install clusters on them, or just add them to an existing cluster as nodes. Please check s9s container for details. |
--credential-id=ID | The cloud credential ID to be used when creating a new container. This value is optional; if it is not provided, the controller will find the credential to use by the cloud name and the chosen region. |
--firewalls=LIST | List of firewall (security group) IDs, separated by , or ;, to be used for the newly created containers. Check s9s-container for further details. |
--generate-key | Create a new SSH key pair when creating new containers. If this command-line option is provided, a new SSH key pair will be created and registered for a new user account to provide SSH access to the new container(s). If the command creates more than one container, the same key pair will be registered for all of them. The username will be the username of the authenticated cmon-user; this can be overruled by the --os-user command-line option. When the job creates a new cluster, the generated key pair will be registered for the cluster and the file path will be saved into the cluster’s Cmon configuration file. When adding a node to such a cluster, the --generate-key option should not be passed; the controller will automatically re-use the previously created key pair. |
--image=NAME | The name of the image from which the new container will be created. This option is not mandatory; when a new container is created, the controller can choose an image if needed. To find out which images are supported by the registered container servers, issue the s9s server --list-images command. |
--image-os-user=NAME | The name of the initial OS user defined in the image for the first login. Use this option to create containers based on custom images. |
--os-password=PASSWORD | This command-line option can be passed when creating new containers to set the password for the user that will be created on the container. Please note that some virtualization backends might not support passwords, only keys. |
--subnet-id=ID | This option can be used when new containers are created to set the subnet ID for the container. To find out which subnets are supported by the registered container servers, issue the s9s server --list-subnets command. |
--template=NAME | The name of the container template. See Container Template. |
--volumes=LIST | When a new container is created, this command-line option can be used to pass a list of volumes to be created for the container. The list can contain one or more volumes separated by the ; character. Every volume consists of three properties separated by the : character: a volume name, the volume size in gigabytes, and a volume type, either “HDD” or “SSD”. The string vol1:5:hdd;vol2:10:hdd, for example, defines two hard-disk volumes, one of 5GB and one of 10GB. For convenience, the volume name and the type can be omitted, in which case automatically generated volume names are used. |
--vpc-id=ID | This option can be used when new containers are created to set the VPC ID for the container. To find out which VPCs are supported by the registered container servers, issue the s9s server --list-subnets --long command. |
LOAD BALANCER | |
--admin-password=PASSWORD | The password for the administrator of load balancers. |
--admin-user=USERNAME | The username for the administrator of load balancers. |
--dont-import-accounts | If this option is provided, the database accounts will not be imported after the load balancer is installed and added to the cluster. The accounts can be imported later, but that will not be part of the load balancer installation performed by the controller. |
--haproxy-config-template=FILENAME | Configuration template for the HAProxy installation. |
--monitor-password=PASSWORD | The password of the monitoring user of the load balancer. |
--monitor-user=USERNAME | The username of the monitoring user of the load balancer. |
--maxscale-mysql-user=USERNAME | The MySQL username of the MaxScale load balancer. |
--maxscale-mysql-password=PASSWORD | The password of the MySQL user of the MaxScale load balancer. |
SSL | |
--ssl-ca=PATH | The SSL CA file path on the controller. |
--ssl-cert=PATH | The SSL certificate file path on the controller. |
--ssl-key=PATH | The SSL key file path on the controller. |
--ssl-pass=PASSWD | The password for an existing CA private key when registering a cluster. |
--move-certs-dir=PATH | The directory path to which the stored SSL certificates will be moved on the imported cluster (please omit the initial /var/lib/cmon/ca). |
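Besides setting up SSL at creation time with --with-ssl, SSL connections can be enabled later on a running cluster with the --enable-ssl command from the command table; a minimal sketch:

$ s9s cluster --enable-ssl \
--cluster-id=1 \
--log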
Cluster List
Using the --list and --long command-line options, a detailed list of the clusters can be printed. Here is an example of such a list:
$ s9s cluster --list --long
ID STATE TYPE OWNER GROUP NAME COMMENT
1 STARTED replication pipas users mysqlrep All nodes are operational.
Total: 1
The list contains the following fields:
Field | Description |
---|---|
ID | The cluster ID of the given cluster. |
STATE | A short string describing the state of the cluster. Possible values are MGMD_NO_CONTACT , STARTED , NOT_STARTED , DEGRADED , FAILURE , SHUTTING_DOWN , RECOVERING , STARTING , UNKNOWN , STOPPED . |
TYPE | The type of the cluster. Possible values are mysqlcluster , replication , galera , group_repl , mongodb , mysql_single , postgresql_single . |
OWNER | The user name of the owner of the cluster. |
GROUP | The group owner’s name. |
NAME | The name of the cluster. |
COMMENT | A short human-readable description of the current state of the cluster. |
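The details of a single cluster can be printed with the --stat command, for example:

$ s9s cluster --stat --cluster-id=1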
Node List
The list of nodes or hosts is enumerated in a special string using a semicolon as a field separator (e.g. 192.168.1.1;192.168.1.2). The strings in the node list are URLs that can have the following protocols:
URI | Description |
---|---|
mysql:// | The protocol to install and handle MySQL servers. |
ndbd:// | The protocol for MySQL Cluster (NDB) data node servers. |
ndb_mgmd:// | The protocol for MySQL Cluster (NDB) management node servers. The mgmd:// notation is also accepted. |
haproxy:// | Used to create and manipulate HAProxy servers. |
proxysql:// | Use this to install and handle ProxySQL servers. |
maxscale:// | The protocol to install and handle MaxScale servers. |
mongos:// | The protocol to install and handle mongo router servers. |
mongocfg:// | The protocol to install and handle mongo config servers. |
mongodb:// | The protocol to install and handle mongo data servers. |
pgbackrest:// | The protocol to install and handle the PgBackRest backup tool. |
pgbouncer:// | The protocol to install and handle PgBouncer servers. |
pbmagent:// | The protocol to install and handle PBMAgent (Percona Backup for MongoDB agent) servers. |
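The protocol prefix selects the software that is installed on the given host. As a minimal sketch with a hypothetical address, a PgBouncer node could be added to an existing PostgreSQL cluster like this:

$ s9s cluster --add-node \
--cluster-id=1 \
--nodes="pgbouncer://192.168.0.81" \
--wait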
Cluster Format
The string controls the format of the printed information about clusters. When this command-line option is used, the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields, with flag characters as specified for the standard printf() C library function. The % specifiers are ended by field name letters that refer to various properties of the clusters.
The %+12I format string, for example, has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the message are encoded by letters: in %-5I, for example, the letter I encodes the “cluster ID” field, so the numerical ID of the cluster will be substituted. Standard \ notation is also available; \n, for example, encodes a newline character.
The s9s-tools support the following fields:
Field | Description |
---|---|
a | The number of active alarms on the cluster. |
C | The configuration file for the cluster. |
c | The total number of CPU cores in the cluster. Please note that this number may be affected by hyper-threading. When a computer has 2 identical CPUs, with four cores each, and uses 2x hyper-threading it will count as 2x4x2 = 16. |
D | The domain name of the controller of the cluster. This is the string one would get by executing the “domainname” command on the controller host. |
G | The name of the group owner of the cluster. |
H | The hostname of the controller of the cluster. This is the string one would get by executing the “hostname” command on the controller host. |
h | The number of the hosts in the cluster including the controller itself. |
I | The numerical ID of the cluster. |
i | The total number of monitored disk devices (partitions) in the cluster. |
k | The total number of disk bytes found on the monitored devices in the cluster. This is a double-precision floating-point number measured in Terabytes. With the f modifier (e.g. %6.2fk) this will report the free disk space in TeraBytes. |
L | The log file of the cluster. |
M | A human-readable short message that describes the state of the cluster. |
m | The size of the memory of all the hosts in the cluster added together, measured in GBytes. This value is represented by a double-precision floating-point number, so formatting it with precision (e.g. %6.2m) is possible. When used with the f modifier (e.g. %6.2fm) this reports the free memory, the memory that is available for allocation, used for cache, or used for buffers. |
N | The name of the cluster. |
n | The total number of monitored network interfaces in the cluster. |
O | The name of the owner of the cluster. |
P | The CDT path of the cluster. |
S | The state of the cluster. |
T | The type of the cluster. |
t | The total network traffic (both received and transmitted) measured in MBytes/seconds found in the cluster. |
V | The vendor and the version of the main software (e.g. the MySQL server) on the node. |
U | The number of physical CPUs on the host. |
u | The CPU usage percent found on the cluster. |
w | The total swap space found in the cluster measured in GigaBytes. With the f modifier (e.g. %6.2fw) this reports the free swap space in GigaBytes. |
% | The % character itself. |
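For example, the following command uses the I, S, and N fields from the table above to print the ID, state, and name of every cluster:

$ s9s cluster --list --long --cluster-format="%5I %-10S %N\n"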
Cluster Tagging
The concept is very similar to the hash-tags used by popular services such as Twitter and Instagram. A cluster can be created with tags by using the --with-tags option:
$ s9s cluster --create \
--cluster-name="MyMariaDBGaleraCluster" \
--cluster-type=galera \
--provider-version="10.5" \
--vendor=mariadb \
--nodes="mysql://10.10.10.10?hostname_internal=2323" \
--os-user=vagrant \
--os-key-file=~/.ssh/id_rsa \
--with-tags="MDB;DC1;PRODUCTION" \
--log \
--print-request
Multiple values are supported using a semi-colon as a delimiter. The tag values are case-sensitive.
An existing cluster can be tagged with the --add-tag and --tag options by specifying the CMON tree path, which is retrievable using the tree command. To retrieve the “tree path”, list the CMON object tree and look at the “NAME” column, as shown in the following example:
$ s9s tree --list --long
MODE SIZE OWNER GROUP NAME
crwxrwx--- - system admins MariaDB Replication 10.3
srwxrwxrwx - system admins localhost
drwxrwxr-- 1, 0 system admins groups
urwxr--r-- - admin admins admin
urwxr--r-- - nobody admins nobody
urwxr--r-- - system admins system
Then, specify the tree path as “/” + the tree name (“MariaDB Replication 10.3”), as shown in the following:
$ s9s tree --add-tag --tag="REPLICATION;PRODUCTION" "/MariaDB Replication 10.3"
Tag is added.
To show all clusters having a certain tag:
$ s9s cluster --list --long --with-tags="PRODUCTION"
To show all clusters that do not have a certain tag:
$ s9s cluster --list --long --without-tags="PRODUCTION"
To filter using multiple tag values:
$ s9s cluster --list --long --with-tags="PRODUCTION;DEV"
Examples
Create a three-node Percona XtraDB Cluster 8.0 cluster, with OS user vagrant:
$ s9s cluster --create \
--cluster-type=galera \
--nodes="10.10.10.10;10.10.10.11;10.10.10.12" \
--vendor=percona \
--provider-version=8.0 \
--db-admin-passwd='pa$$word' \
--os-user=vagrant \
--os-key-file=/home/vagrant/.ssh/id_rsa \
--cluster-name='Percona XtraDB Cluster 8.0'
Create a three-node MongoDB Replica Set 5.0 by MongoDB Inc. (formerly 10gen), use the default /root/.ssh/id_rsa as the SSH key, and let the deployment job run in the foreground:
$ s9s cluster --create \
--cluster-type=mongodb \
--nodes="10.0.0.148;10.0.0.189;10.0.0.219" \
--vendor=10gen \
--provider-version='5.0' \
--os-user=root \
--db-admin='admin' \
--db-admin-passwd='MyS3cr3tPass' \
--cluster-name='MongoDB ReplicaSet 5.0' \
--wait
An example of creating a MongoDB Sharded Cluster with three mongos nodes, three mongo config nodes, and shards that include a three-node replica set called ‘replset2’:
$ s9s cluster --create \
--cluster-type=mongodb \
--vendor=10gen \
--provider-version=5.0 \
--db-admin=adminuser \
--db-admin-passwd=adminpwd \
--os-user=root \
--os-key-file=/root/.ssh/id_rsa \
--nodes="mongos://192.168.1.11;mongos://192.168.1.12;mongos://192.168.1.12;mongocfg://192.168.1.11;mongocfg://192.168.1.12;mongocfg://192.168.1.13;192.168.1.14?priority=5.0;192.168.1.15?arbiter_only=true;192.168.1.16?priority=2;192.168.1.17?rs=replset2;192.168.1.18?rs=replset2&arbiter_only=yes;192.168.1.19?rs=replset2&slave_delay=3&priority=0"
Import an existing Percona XtraDB Cluster 8.0 and let the job run in the foreground (provided passwordless SSH from the ClusterControl node to all database nodes has been set up correctly):
$ s9s cluster --register \
--cluster-type=galera \
--nodes="192.168.100.34;192.168.100.35;192.168.100.36" \
--vendor=percona \
--provider-version=8.0 \
--db-admin="root" \
--db-admin-passwd="root123" \
--os-user=root \
--os-key-file=/root/.ssh/id_rsa \
--cluster-name="My DB Cluster" \
--wait
Create a MySQL 8.0 replication cluster by Oracle with multiple masters and slaves (note the ? sign used to identify the node’s role in the --nodes parameter):
$ s9s cluster --create \
--cluster-type=mysqlreplication \
--nodes="192.168.1.117?master;192.168.1.113?slave;192.168.1.115?slave;192.168.1.116?master;192.168.1.118?slave;192.168.1.119?slave;" \
--vendor=oracle \
--db-admin="root" \
--db-admin-passwd="root123" \
--cluster-name=ft_replication_23986 \
--provider-version=8.0 \
--log
Create a Redis Cluster v7 with six Redis nodes:
$ s9s cluster --create \
--cluster-type=redis-sharded \
--redis-port=6379 \
--redis-bus-port=16579 \
--vendor=redis \
--provider-version=7 \
--node-timeout-ms=5000 \
--replica-validity-factor=10 \
--db-admin=s9s-user \
--db-admin-passwd=s9ss9s \
--nodes="redis-primary://rlc2:6479;redis-replica://rlc3:6479;redis-primary://rlc4:6479;redis-replica://rlc5:6479;redis-primary://rlc6;redis-replica://rlc7" \
--os-user=root \
--os-key-file=/root/.ssh/id_rsa \
--cluster-name="My Redis Cluster v7" \
--log \
--print-request
Create a Redis Sentinel v6 with one master and two slaves:
$ s9s cluster --create \
--cluster-type=redis \
--nodes="redis://rlc2;redis://rlc3;redis://rlc4;redis-sentinel://rlc2;redis-sentinel://rlc3;redis-sentinel://rlc4" \
--os-user=root \
--os-key-file=/root/.ssh/id_rsa \
--vendor=redis \
--provider-version=6 \
--log \
--print-request \
--cluster-name="My Redis Sentinel v6"
Import an existing Redis Cluster and let the job run in the foreground (provided passwordless SSH from the ClusterControl node to all database nodes has been set up correctly):
$ s9s cluster --register \
--cluster-type=redis-sharded \
--redis-port=6379 \
--db-admin=s9s-user \
--db-admin-passwd=s9ss9s \
--os-user=root \
--os-key-file=/root/.ssh/id_rsa \
--nodes="redis-primary://rlc3:6479" \
--vendor=redis \
--log \
--cluster-name="My Redis Cluster" \
--print-request \
--wait
Create a PostgreSQL 12 streaming replication cluster with one master and two slaves (note the ? sign used to identify the node’s role in the --nodes parameter):
$ s9s cluster --create \
--cluster-type=postgresql \
--nodes="192.168.1.81?master;192.168.1.82?slave;192.168.1.83?slave;" \
--db-admin="postgres" \
--db-admin-passwd="mySuperStongP455w0rd" \
--cluster-name=ft_replication_23986 \
--os-user=vagrant \
--os-key-file=/home/vagrant/.ssh/id_rsa \
--provider-version=12 \
--log
List all clusters with more details:
$ s9s cluster --list --long
Drop the cluster with cluster ID 1 from the controller:
$ s9s cluster --drop --cluster-id=1
Add a new database node on Cluster ID 1:
$ s9s cluster --add-node \
--nodes=10.10.10.14 \
--cluster-id=1 \
--wait
Delete a database named my_database from the cluster named galera_001:
$ s9s cluster \
--delete-database \
--print-request \
--cluster-name="galera_001" \
--db-name="my_database" \
--log
Add a data node to an existing MongoDB Sharded Cluster with cluster ID 12 having replica set name ‘replset2’:
$ s9s cluster --add-node \
--cluster-id=12 \
--nodes="mongodb://192.168.1.20?rs=replset2"
Create an HAProxy load balancer, 192.168.55.198 on cluster ID 1:
$ s9s cluster --add-node \
--cluster-id=1 \
--nodes="haproxy://192.168.55.198" \
--wait
Remove a database node from cluster ID 1 as a background job:
$ s9s cluster --remove-node \
--nodes=10.10.10.13 \
--cluster-id=1
Check whether the hosts are part of another cluster and are accessible from ClusterControl:
$ s9s cluster --check-hosts \
--nodes="10.0.0.148;10.0.0.189;10.0.0.219"
Schedule a rolling restart of the cluster 20 minutes from now:
$ s9s cluster --rolling-restart \
--cluster-id=1 \
--schedule="$(date -d 'now + 20 min')"
Create a database on the cluster with the given name:
$ s9s cluster --create-database \
--cluster-id=2 \
--db-name=my_shopping_db
Create a database account on the cluster and also create a new database to be used by the new user. Grant all access to the new database for the new user:
$ s9s cluster --create-account \
--cluster-id=1 \
--account="john:[email protected]" \
--with-database
Create a cluster and tag it using the --with-tags option:
$ s9s cluster --create \
--cluster-name="TAGGED_MDB" \
--cluster-type=galera \
--provider-version="10.4" \
--vendor=mariadb \
--nodes="mysql://10.10.10.14:3306" \
--os-user=vagrant \
--with-tags="MDB;DC1;PRODUCTION" \
--log
List all databases for every cluster:
$ s9s cluster --list-databases --long
SIZE #TBL #ROWS OWNER GROUP CLUSTER DATABASE
0 0 - system admins MySQL Rep Oracle 8.0 sys
381681664 7 690984 system admins MySQL Rep Oracle 8.0 db5
381681664 7 690984 system admins MySQL Rep Oracle 8.0 db4
599785472 11 1083295 system admins MySQL Rep Oracle 8.0 db3
272629760 5 493560 system admins MySQL Rep Oracle 8.0 db2
381681664 7 690984 system admins MySQL Rep Oracle 8.0 db1
7340032 2 0 system admins PostgreSQL 13 postgres
7340032 0 0 system admins PostgreSQL 13 template1
7340032 0 0 system admins PostgreSQL 13 template0
7340032 0 0 system admins PostgreSQL 13 db1
7340032 0 0 system admins PostgreSQL 13 db2
Total: 11 databases, 2059403264, 39 tables.
Show all clusters having a certain tag (an existing cluster can be tagged using s9s tree), with multiple tags separated by a semi-colon:
$ s9s cluster --list \
--long \
--with-tags="PRODUCTION;big_cluster"
List out CMON configuration options for cluster ID 2:
$ s9s cluster --list-config --cluster-id=2
Change a CMON configuration option called diskspace_warning for cluster ID 2 (the configuration change will be applied to both the CMON runtime and the configuration file, /etc/cmon.d/cmon_2.cnf):
$ s9s cluster --change-config \
--cluster-id=2 \
--opt-name=diskspace_warning \
--opt-value=70
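The new value can then be verified by listing the configuration again; as a sketch, assuming the --opt-name option also filters the listed entries:

$ s9s cluster --list-config \
--cluster-id=2 \
--opt-name=diskspace_warning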
Upgrade cluster packages while keeping the same major version for cluster ID 1:
$ s9s cluster \
--upgrade-cluster \
--cluster-id=1 \
--nodes="192.168.0.84:5433;192.168.0.85" \
--wait
Upgrade cluster packages to a newer major version for cluster ID 1:
$ s9s cluster \
--upgrade-cluster \
--upgrade-to-version=12 \
--cluster-id=1 \
--log
Upgrade cluster packages to a newer major version for cluster ID 1 with a specific upgrade method:
$ s9s cluster \
--upgrade-cluster \
--upgrade-to-version=12 \
--upgrade-method=link \
--cluster-id=1 \
--log
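Before running an upgrade job, the available upgrade candidates can be inspected with the --available-upgrades command from the command table; a minimal sketch:

$ s9s cluster --available-upgrades --cluster-id=1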