
Manage

Hosts

Lists the hosts managed by ClusterControl for the specific cluster. This includes:

  • ClusterControl node
  • MySQL nodes (Galera, replication, group replication, and standalone)
  • MySQL slave nodes (Galera and replication)
  • garbd nodes (Galera)
  • HAProxy nodes (Galera and MySQL Cluster)
  • Keepalived nodes
  • MaxScale nodes
  • ProxySQL nodes
  • MySQL API nodes (MySQL Cluster)
  • Management nodes (MySQL Cluster)
  • Data nodes (MySQL Cluster)

The list also contains the respective hosts’ operating system, host status, ping time, and process ID of the main role.

Configurations

Manages the configuration files of your database, HAProxy, and Garbd nodes. For MySQL databases, variable changes can be persisted across one node or a group of nodes at once; dynamic variables are applied directly without a restart.

Note

ClusterControl does not store configuration change history, so there is no versioning at the moment; only one version exists at any time. It imports the latest configuration files every 30 minutes and overwrites them in the CMON database. This limitation will be addressed in an upcoming release, where ClusterControl shall support configuration versioning with a dynamic import interval.

Field Description
Save
  • Save the changes that you have made and push them to the corresponding node.
Import
  • Re-import configuration if you have:
    • Performed local configuration changes directly on the configuration files.
    • Restarted the MySQL servers or performed a rolling restart after a configuration change.
Change/Set Parameter
  • The selected parameter will be changed or created in the specified option group. ClusterControl will attempt to set the configuration value dynamically if the parameter is valid; the change can then be persisted in the configuration file.
  • For example, if you want to turn off read_only, which is a dynamic variable, choose it from the parameter list and specify a new value of 0. ClusterControl will then perform the change using the SET GLOBAL statement and persist it in the configuration file accordingly (see the sketch after the note below).
Attention

If you change a global system variable, the value is remembered and used ONLY for new connections.
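
As a rough illustration of this workflow (a minimal sketch; the credentials and configuration file path are assumptions that depend on your installation), the equivalent manual steps for the read_only example above would be:

    # Apply the change at runtime (what ClusterControl does via SET GLOBAL)
    mysql -uroot -p -e "SET GLOBAL read_only = 0;"

    # Verify the new value
    mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'read_only';"

    # Persist the change so it survives a restart, e.g. under the [mysqld]
    # section of /etc/my.cnf:
    #   read_only = 0

As noted above, a changed global system variable only applies to new connections.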

Base Template Files

All services configured by ClusterControl use a base configuration template available under /usr/share/cmon/templates on the ClusterControl node. You can modify these files directly to suit your deployment policy; however, this directory will be replaced after a package upgrade.

To make sure your custom configuration template files persist across upgrades, store your template files under the /etc/cmon/templates directory (ClusterControl 1.6.2 and later). When ClusterControl loads a template file for deployment, files under /etc/cmon/templates always take priority over files under /usr/share/cmon/templates. If files with identical names exist in both directories, the one located under /etc/cmon/templates will be used.
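
For example (a sketch; the template name is taken from the table below and the editor is arbitrary), to customize the Galera template in a way that survives ClusterControl package upgrades:

    # On the ClusterControl node: copy the stock template to the override directory
    mkdir -p /etc/cmon/templates
    cp /usr/share/cmon/templates/my.cnf.galera /etc/cmon/templates/my.cnf.galera

    # Edit the copy; files under /etc/cmon/templates take precedence at deployment time
    vi /etc/cmon/templates/my.cnf.galera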

The following are template files provided by ClusterControl related to MySQL/MariaDB:

Filename Description
config.ini.mc MySQL Cluster configuration file (config.ini)
garbd.cnf Galera arbitrator daemon (garbd) configuration file.
haproxy.cfg HAProxy configuration template for Galera Cluster.
haproxy_rw_split.cfg HAProxy configuration template for read-write splitting.
keepalived-1.2.7.conf Legacy Keepalived configuration file (pre 1.2.7). This is deprecated.
keepalived.conf Keepalived configuration file.
keepalived.init Keepalived init script.
MaxScale_2.2_template.cnf MaxScale 2.2 configuration template.
MaxScale_template.cnf MaxScale configuration template.
my-cnf-backup-secrets.cnf MySQL configuration template for the generated backup user.
my-root.cnf Config file used by MySQL in order to perform log rotation.
my.cnf.80-pxc MySQL configuration template for Percona XtraDB Cluster 8.0.
my.cnf.galera MySQL configuration template for Galera Cluster.
my.cnf.grouprepl MySQL configuration template for MySQL Group Replication.
my.cnf.gtid_replication MySQL configuration template for MySQL Replication with GTID.
my.cnf.mdb10x-galera MariaDB configuration template for MariaDB Galera 10 and later.
my.cnf.mdb10x-replication MariaDB configuration template for MariaDB Replication 10 and later.
my.cnf.mdb55-galera MariaDB configuration template for MariaDB Galera 5.5.
my.cnf.mysqlcluster MySQL configuration template for MySQL Cluster.
my.cnf.ndb-8.0 MySQL configuration template for MySQL Cluster 8.0.
my.cnf.pxc55 MySQL configuration template for Percona XtraDB Cluster 5.5.
my.cnf.repl57 MySQL configuration template for MySQL Replication 5.7.
my.cnf.repl80 MySQL configuration template for MySQL Replication 8.0.
my.cnf.replication MySQL configuration template for MySQL/MariaDB without MySQL’s GTID.
my57.cnf.galera MySQL configuration template for MySQL Galera 5.7.
mysqlchk.galera MySQL health check script template for Galera Cluster.
mysqlchk.mysql MySQL health check script template for standalone MySQL server.
mysqlchk_rw_split.mysql MySQL health check script template for MySQL Replication (master-slave).
mysqlchk_xinetd Xinetd configuration template for MySQL health check.
mysqld.service.override Systemd unit file template for MySQL service.
mysql_logrotate Log rotation configuration template for MySQL.
proxysql_galera_checker.sh ProxySQL health check script for Galera Cluster.
proxysql_logrotate Log rotation configuration template for ProxySQL.
proxysql_template.cnf ProxySQL configuration template.
Attention

Starting from ClusterControl 1.9.7 (September 2023), ClusterControl GUI v2 is the default frontend graphical user interface (GUI) for ClusterControl. Note that the GUI v1 is considered a feature-freeze product with no future development. All new developments will be happening on ClusterControl GUI v2. See User Guide (GUI v2).

Dynamic Variables

A number of configuration variables are dynamically configurable by ClusterControl. These variables are represented by a capitalized name enclosed in @ signs, for example, @DATADIR@. The following table lists the variables supported by ClusterControl for MySQL-based clusters; a brief illustration follows the table:

Variable Description
@BASEDIR@ Default is /usr. Value specified during cluster deployment takes precedence.
@DATADIR@ Default is /var/lib/mysql. Value specified during cluster deployment takes precedence.
@MYSQL_PORT@ The default is 3306. Value specified during cluster deployment takes precedence.
@BUFFER_POOL_SIZE@ Automatically configured based on the host’s RAM.
@LOG_FILE_SIZE@ Automatically configured based on the host’s RAM.
@LOG_BUFFER_SIZE@ Automatically configured based on the host’s RAM.
@BUFFER_POOL_INSTANCES@ Automatically configured based on the host’s CPU.
@SERVER_ID@ Automatically generated based on member’s server-id.
@SKIP_NAME_RESOLVE@ Automatically configured based on MySQL variables.
@MAX_CONNECTIONS@ Automatically configured based on the host’s RAM.
@ENABLE_PERF_SCHEMA@ Default is disabled. Value specified during cluster deployment takes precedence.
@WSREP_PROVIDER@ Automatically configured based on the Galera vendor.
@HOST@ Automatically configured based on hostname/IP address.
@GCACHE_SIZE@ Automatically configured based on disk space.
@SEGMENTID@ Default is 0. Value specified during cluster deployment takes precedence.
@WSREP_CLUSTER_ADDRESS@ Automatically configured based on members in the cluster.
@WSREP_SST_METHOD@ Automatically configured based on the Galera vendor.
@BACKUP_USER@ Default is backupuser.
@BACKUP_PASSWORD@ Automatically generated and configured for backupuser.
@GARBD_OPTIONS@ Automatically configured based on garbd options.
@READ_ONLY@ Automatically configured based on replication role.
@SEMISYNC@ Default is disabled. Value specified during cluster deployment takes precedence.
@NDB_CONNECTION_POOL@ Automatically configured based on the host’s CPU.
@NDB_CONNECTSTRING@ Automatically configured based on members in the MySQL cluster.
@LOCAL_ADDRESS@ Automatically configured based on the host’s address.
@GROUP_NAME@ Default is grouprepl. Value specified during cluster deployment takes precedence.
@PEERS@ Automatically configured based on members in the Group Replication cluster.
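
As a rough illustration (a sketch; the template name and the matched lines are examples, not verbatim contents of the shipped files), you can see how these placeholders appear inside a base template on the ClusterControl node:

    # Inspect a few placeholders in one of the MySQL templates
    grep -E '@(DATADIR|SERVER_ID|MAX_CONNECTIONS)@' /usr/share/cmon/templates/my.cnf.repl80

    # Typical matches look like the following; ClusterControl substitutes the
    # values at deployment time:
    #   datadir=@DATADIR@
    #   server_id=@SERVER_ID@
    #   max_connections=@MAX_CONNECTIONS@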

Load Balancer

Manages the deployment of load balancers (HAProxy, ProxySQL, and MaxScale) and a virtual IP address (Keepalived). For Galera Cluster, it is also possible to add the Galera arbitrator daemon (Garbd) through this interface.

ProxySQL

Introduced in v1.4.0 and exclusive to MySQL-based clusters. By default, ClusterControl deploys ProxySQL in read/write split mode by creating two host groups – read-only traffic is sent to the slaves while writes are sent to a writable master. ProxySQL also works together with the automatic failover mechanism added in ClusterControl 1.4.0 – once a failover happens, ProxySQL detects the new writable master and routes writes to it. It all happens automatically, without any user intervention.
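
To see the effect of the read/write split after deployment (a minimal sketch; the admin credentials and admin port are the defaults mentioned elsewhere in this guide, and the exact host group IDs depend on your setup), you can inspect the ProxySQL admin interface:

    # Connect to the ProxySQL admin interface (default admin port is 6032)
    mysql -h127.0.0.1 -P6032 -uadmin -p \
      -e "SELECT hostgroup_id, hostname, status FROM mysql_servers;"

    # Query rules route read-only traffic to the reader host group and
    # everything else to the writer host group
    mysql -h127.0.0.1 -P6032 -uadmin -p \
      -e "SELECT rule_id, match_digest, destination_hostgroup FROM mysql_query_rules;"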

Deploy ProxySQL

Choose where to install

Specify the host on which you want to install ProxySQL. You can use an existing database server or a new host by specifying the hostname or IPv4 address.

Field  Description
Server Address
  • Specify the hostname or IP address of the host. This host must be accessible via passwordless SSH from the ClusterControl node (see the sketch after this table).
Port
  • ProxySQL load-balanced port. The default is 6033.
Select Version
  • Pick a ProxySQL major version to be installed by ClusterControl. Default is 2.x.
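
Passwordless SSH from the ClusterControl node to the target host can be prepared beforehand, for example (a sketch; the user and IP address are placeholders):

    # On the ClusterControl node: generate a key pair if one does not exist yet
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

    # Copy the public key to the target host
    ssh-copy-id root@192.168.1.10

    # Verify that no password prompt appears
    ssh root@192.168.1.10 "hostname"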

ProxySQL Configuration

Field  Description
Import Configuration
  • Deploys a new ProxySQL based on an existing ProxySQL instance. The source instance must be added first into ClusterControl. Once added, you can choose the source ProxySQL instance from a dropdown list.
Disable Firewall
  • Check the box to disable the firewall (recommended).
Disable AppArmor/SELinux
  • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
Use Native Clustering
  • The ProxySQL server will be created using native ProxySQL clustering. An entry will be created in the proxysql_server table.
  • It is recommended to enable this if you would like to have more than one ProxySQL node. Port 6032 must be reachable between all ProxySQL nodes.

ProxySQL User Credentials

Two ProxySQL users are required, one for administration and another one for monitoring. ClusterControl will create both during deployment. This section will be greyed out if you already have enabled Native Clustering for any existing ProxySQL node because the administration and monitoring users must be identical on all ProxySQL nodes.

Field  Description
Administration User
  • ProxySQL administration user name.
Administration Password
  • Password for Administration User.
Monitor User
  • ProxySQL monitoring user name.
Monitor Password
  • Password for Monitor User.

Add database user

You can use an existing database user (created outside ProxySQL) or let ClusterControl create a new database user under this section. ProxySQL sits in the middle, between the application and the backend MySQL servers, so the database users need to be able to connect from the ProxySQL IP address.

Field  Description
Use existing DB User
  • DB User: The database user name.
  • DB User Password: Password for DB User.
Create new DB User
  • DB User: The database user name.
  • DB Password: Password for DB User.
  • DB Name: Database name in “database.table” format. To GRANT against all tables, use a wildcard, for example: mydb.*.
  • Type in the MySQL privilege(s): ClusterControl will suggest privilege names as you type. Multiple privileges are possible (see the sketch after the note below).
Note

The user must exist on the database nodes and must be allowed to connect from the ProxySQL server.
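
The resulting account is roughly equivalent to what you would create manually on the database nodes (a sketch; the user name, ProxySQL host address, privileges, and database are examples):

    # What "Create new DB User" amounts to on the backend (illustrative only)
    mysql -uroot -p -e "CREATE USER 'appuser'@'192.168.1.30' IDENTIFIED BY 'secret';"
    mysql -uroot -p -e "GRANT SELECT, INSERT, UPDATE ON mydb.* TO 'appuser'@'192.168.1.30';"

Here 192.168.1.30 stands for the ProxySQL host address, since the application connects through ProxySQL.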

Select instances to balance

Choose which servers to include in the load balancing set.

Field  Description
Server Instance
  • List of MySQL servers monitored by ClusterControl.
Include
  • Toggle to YES to include it. Otherwise, choose NO.
Max Replication Lag
  • The number of seconds of replication lag allowed before marking the node as unhealthy. The default value is 10.
Max Connection
  • Maximum connections to be sent to the backend servers. It is recommended to set this equal to or lower than the max_connections value of the backend servers.
Weight
  • This value is used to adjust the server’s weight relative to other servers. All servers will receive a load proportional to their weight relative to the sum of all weights. The higher the weight, the higher the priority.

Implicit Transactions

Field  Description
Are you using implicit transactions?
  • YES – If you rely on SET AUTOCOMMIT=0 to create a transaction.
  • NO – If you explicitly use BEGIN or START TRANSACTION to create a transaction. Choose NO if you are unsure of this part.
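
The difference can be illustrated as follows (a sketch; the table and values are placeholders):

    # Implicit transaction: autocommit is turned off, so the UPDATE implicitly
    # opens a transaction that must be committed
    mysql -uroot -p -e "SET autocommit = 0; UPDATE t1 SET c1 = 1 WHERE id = 5; COMMIT;"

    # Explicit transaction: BEGIN / START TRANSACTION is issued explicitly
    mysql -uroot -p -e "START TRANSACTION; UPDATE t1 SET c1 = 1 WHERE id = 5; COMMIT;"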

Import ProxySQL

If you already have ProxySQL installed in your setup, you can easily import it into ClusterControl to benefit from monitoring and management of the instance.

Existing ProxySQL location

Field  Description
Server Address
  • Specify the hostname or IP address. You can choose from the dropdown list or type in a new host.
Listening Port
  • ProxySQL load-balanced port. The default is 6033.

ProxySQL Configuration

Field  Description
Import Configuration
  • Adds an existing ProxySQL instance and imports the configuration from another existing instance. The source instance must be added first into ClusterControl. Once added, you can choose the source ProxySQL instance from a dropdown list.

ProxySQL User Credentials

Field  Description
Administration User
  • ProxySQL administration user name.
Administration Password
  • Password for Administration User.

HAProxy

Installs and configures an HAProxy instance. ClusterControl will automatically install and configure HAProxy, install the mysqlchk script (for MySQL health checks) on each of the database nodes as part of the xinetd service, and start the HAProxy service. Once the installation is complete, MySQL will listen on the Listen Port (3307 by default) on the configured node.

This feature is idempotent; you can execute it as many times as you want, and it will always reinstall everything as configured.
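
Once deployed, the health check endpoint can be verified from the HAProxy node (a sketch; the database node IP is a placeholder):

    # The xinetd-managed health check script answers on port 9200 of each database node
    curl -si http://192.168.1.21:9200/

    # A healthy node typically returns an HTTP 200 response; an unhealthy or
    # read-only node typically returns 503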

Deploy HAProxy

Field  Description
Server Address
  • Select which host to add the load balancer. If the host is not provisioned in ClusterControl (see Hosts), type in the IP address. The required files will be installed on the new host. Note that ClusterControl will access the new host using passwordless SSH.
Policy
  • Choose one of these load-balancing algorithms (illustrated in the configuration sketch after this table):
    • leastconn – The server with the lowest number of connections receives the connection.
    • round-robin – Each server is used in turns, according to their weights.
    • source – The same client IP address will always reach the same server as long as no server goes down.
Listen Port (Read/Write)
  • Specify the HAProxy listening port. This will be used as the load-balanced MySQL connection port for read/write connections.
Install for read/write splitting (master-slave replication)
  • Toggle on if you want HAProxy to use another listener port for read-only connections. A new text box will appear next to the Listen Port (Read/Write) text box. The default is 3308.
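
A fragment of the resulting haproxy.cfg reflects these choices roughly as follows (a sketch; the listener name, ports, and server addresses are placeholders, not the exact generated template):

    # Illustrative fragment of /etc/haproxy/haproxy.cfg on the load balancer node
    cat /etc/haproxy/haproxy.cfg
    #   listen haproxy_3307_rw
    #       bind *:3307
    #       mode tcp
    #       balance leastconn
    #       option httpchk
    #       server db1 192.168.1.21:3306 check port 9200
    #       server db2 192.168.1.22:3306 check port 9200 backup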

Installation Settings

Field  Description
Overwrite Existing /usr/local/sbin/mysqlchk on targets
  • Toggle on if you want to overwrite any existing MySQL health check script on the load balancer node.
Disable Firewall?
  • Toggle on to disable the firewall (recommended). Otherwise, ClusterControl will not perform this action and the existing firewall rules (if any) will remain active.
Disable SELinux/AppArmor?
  • Toggle on to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).

Advanced Settings

Field  Description
Stats Socket
  • Specify the path to bind a UNIX socket for HAProxy statistics. See stats socket.
Admin Port
  • Port on which the HAProxy statistics page listens.
Admin User
  • Admin username to access the HAProxy statistic page. See stats auth.
Admin Password
  • Password for Admin User. See stats auth.
Backend Name
  • Name for the backend. No whitespace or tab allowed.
Timeout Server (seconds)
  • Sets the maximum inactivity time on the server side. See timeout server.
Timeout Client (seconds)
  • Sets the maximum inactivity time on the client side. See timeout client.
Max Connections Frontend
  • Sets the maximum per-process number of concurrent connections to the HAProxy instance. See maxconn.
Max Connections Backend/per instance
  • Sets the maximum per-process number of concurrent connections per backend instance. See maxconn.
xinetd allow connections from
  • The specified subnet will be allowed to access mysqlchk (or mysqlchk_rw_split for read/write splitting) as an xinetd service, which listens on port 9200 on each of the database nodes. To allow connections from all IP addresses, use the default value, “0.0.0.0/0”.

Server instances in the load balancer

Field  Description
Include
  • Select MySQL servers in your cluster that will be included in the load balancing set.
Role
  • Supported roles:
    • Active – The server is actively used in load balancing.
    • Backup – The server is only used in load balancing when all other non-backup servers are unavailable.
Connection Address
  • Pick the IP address on the host that HAProxy should listen on.

Import HAProxy

Field  Description
HAProxy Address
  • Select which host to add the load balancer. If the host is not provisioned in ClusterControl (see Hosts), type in the IP address. The required files will be installed on the new host. Note that ClusterControl will access the new host using passwordless SSH.
cmdline
  • Specify the command line that ClusterControl should use to start the HAProxy service. You can verify this by running ps -ef | grep haproxy and retrieving the full command with which the HAProxy process was started. Copy the full command line and paste it into the text field (see the example after the note below).
Port
  • Port of the HAProxy admin/statistics page (if enabled).
Admin User
  • Admin username to access the HAProxy statistic page. See stats auth.
Admin Password
  • Password for Admin User.
LB Name
  • Name for the backend. No whitespace or tab allowed.
HAProxy Config
  • Location of HAProxy configuration file (haproxy.cfg) on the target node.
Stats Socket
  • Specify the path to bind a UNIX socket for HAProxy statistics. See stats socket.
  • Usually, HAProxy writes the socket file to /var/run/haproxy.socket. ClusterControl needs this to monitor HAProxy. It is usually defined in the haproxy.cfg file.
Note

You will need an admin user/password set in the HAProxy configuration; otherwise, you will not see any HAProxy statistics.
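
For reference (a sketch; the paths reflect a typical package installation and may differ on your system), retrieving the command line looks like this:

    # Find how HAProxy was started on the target node
    ps -ef | grep haproxy

    # A typical command line to paste into the cmdline field (illustrative):
    #   /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D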

Keepalived

Keepalived requires two or more HAProxy, ProxySQL, or MaxScale instances in order to provide virtual IP address failover. By default, the virtual IP address is assigned to the instance ‘Keepalived 1’. If that node goes down, the IP address automatically fails over to ‘Keepalived 2’.
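
Under the hood, the failover is handled by VRRP. A minimal keepalived.conf fragment for the primary instance looks roughly like this (a sketch; the interface, router ID, priority, and virtual IP are placeholders, not the exact file ClusterControl generates):

    # Illustrative /etc/keepalived/keepalived.conf on the "Keepalived 1" host
    cat /etc/keepalived/keepalived.conf
    #   vrrp_instance VI_1 {
    #       state MASTER
    #       interface eth0
    #       virtual_router_id 51
    #       priority 101            # the secondary node is given a lower priority
    #       virtual_ipaddress {
    #           192.168.1.100
    #       }
    #   }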

Deploy Keepalived

Field  Description
Load balancer type
  • Supported load balancer types to integrate with Keepalived – HAProxy, ProxySQL, and MaxScale. For ProxySQL, you can deploy more than two Keepalived instances.
Keepalived 1
  • Select the primary Keepalived node.
Add Keepalived Instance
  • Shows additional input field for secondary Keepalived node.
Remove Keepalived Instance
  • Hides additional input field for secondary Keepalived node.
Virtual IP
  • Assigns a virtual IP address. The IP address should not exist in any node in the cluster to avoid conflict.
Network Interface
  • Specify a network interface to bind the virtual IP address. This interface must be able to communicate with other Keepalived instances and support IP protocol 112 (VRRP) and unicast.

Import Keepalived

Field  Description
Keepalived 1
  • Specify the IP address or hostname of the primary Keepalived node.
Add Keepalived Instance
  • Shows additional input field for secondary Keepalived node.
Remove Keepalived Instance
  • Hides additional input field for secondary Keepalived node.
Virtual IP
  • Assigns a virtual IP address. The IP address should not exist in any node in the cluster to avoid conflict.

Garbd

Exclusive for Galera Cluster. Galera arbitrator daemon (garbd) can be installed to avoid network partitioning or split-brain scenarios.
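
For reference, a typical garbd invocation looks like the following (a sketch; the node addresses, group name, and log path are placeholders):

    # Start garbd pointing at existing Galera nodes (illustrative command line)
    garbd --address "gcomm://192.168.1.21:4567,192.168.1.22:4567" \
          --group my_galera_cluster \
          --log /var/log/garbd.log \
          --daemon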

Deploy Garbd

Field  Description
Server Address
  • Manually specify the new garbd hostname or IP address or select a host from the list. That host cannot be an existing Galera node.
CmdLine
  • Garbd command line to start garbd process on the target node.
Attention

Deploying garbd on a server where ClusterControl is also running is not supported, as the software packaging tools may remove the existing MySQL packages in the process. However, you can import an existing garbd (i.e., one that you installed manually) if it is running on the ClusterControl host.

Import Garbd

Field  Description
Garbd Address
  • Manually specify the new garbd hostname or IP address or select a host from the list. That host cannot be an existing Galera node.
Port
  •  The garbd port. Default is 4567.
CmdLine
  • Garbd command line to start garbd process on the target node.

MaxScale

MaxScale is an intelligent proxy that forwards database statements to one or more database servers using complex rules, a semantic understanding of the database statements, and awareness of the roles of the various servers within the backend database cluster.

You can deploy or import an existing MaxScale node as a load balancer and query router for your Galera Cluster, MySQL/MariaDB Replication, or MySQL Cluster. For new deployments, ClusterControl will by default create two production services:

  • RW – Implements read-write split access.
  • RR – Implements round-robin access.

To remove MaxScale, go to ClusterControl → Nodes → MaxScale node and click on the icon next to it. We have published a deployment example in the blog post How to Deploy and Manage MaxScale using ClusterControl.

Deploy MaxScale

Use this wizard to install MariaDB MaxScale as a MySQL/MariaDB load balancer.

Field  Description
Server Address
  • The IP address of the node where MaxScale will be installed. ClusterControl must be able to perform passwordless SSH to this host.
MaxScale Admin Username
  • MaxScale admin username. Default is ‘admin’.
MaxScale Admin Password
  • Password for MaxScale Admin Username. Default is ‘MariaDB’.
MaxScale MySQL Username
  • MariaDB/MySQL user that will be used by MaxScale to access and monitor the MariaDB/MySQL nodes in your infrastructure.
MaxScale MySQL Password
  • Password of MaxScale MySQL Username.
Threads
  • How many threads MaxScale is allowed to use.
CLI Port (Port for command line)
  • Port for the MaxAdmin command-line interface. The default is 6603.
RR Port (Port for round robin listener)
  • Port for round-robin listener. The default is 4006.
RW Port (Port for read/write split listener)
  • Port for the read-write split listener. The default is 4008.
Debug Port (Port for debug information)
  • Port for MaxScale debug information. The default is 4442.
Include
  • Select MySQL servers in your cluster that will be included in the load balancing set.

Import MaxScale

If you already have MaxScale installed in your setup, you can easily import it into ClusterControl to benefit from health monitoring and access to MaxAdmin – MaxScale’s CLI from the same interface you use to manage the database nodes. The only requirement is to have passwordless SSH configured between the ClusterControl node and the host where MaxScale is running.

Field  Description
MaxScale Address
  • The IP address of the existing MaxScale server.
CLI Port (Port for the Command Line Interface)
  • Port for the MaxAdmin command-line interface on the target server.

Processes

Manages external processes that are not part of the database system, e.g. a load balancer or an application server. ClusterControl will actively monitor these processes and make sure that they are always up and running by executing the check expression command.
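
For example (a sketch using Apache, matching the examples in the table below), the values entered for a managed process map directly to ordinary shell commands:

    # Start Command: how ClusterControl starts the process
    /usr/sbin/apache2 -DFOREGROUND

    # GREP Expression: must exit 0 when the process is running, non-zero otherwise
    pidof apache2
    echo $?    # 0 means the process was found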

Field  Description
Host/Group
  • Select the managed host.
Process Name
  • Enter the process name. E.g: “Apache 2”.
Start Command
  • OS command to start the process. E.g: /usr/sbin/apache2 -DFOREGROUND.
Pidfile
  • Full path to the process identifier file. E.g: /var/run/apache2/apache2.pid.
GREP Expression
  • OS command to check the existence of the process. The command must return 0 for true, and everything else for false. E.g: pidof apache2.
Remove
  • Removes the managed process from the list of processes managed by ClusterControl.
Deactivate
  • Disables the selected process.

Schemas and Users

Manages database schemas and users’ privileges.

Users

Shows a summary of MySQL users and privileges for the cluster. All changes are automatically synced to all database nodes in the cluster. For a master-slave setup, ClusterControl will create the schema and user on the active master.

You can filter the list by username, hostname, database, or table in the text box. Click on Edit to update the existing user or Drop User to remove the existing user. Click on Create New User to open the user creation wizard:

Field  Description
Username
  • MySQL username.
Password
  • Password for Username. The minimum requirement is 4 characters.
Hostname
  • Hostname or IP address of the user or client. Wildcard (%) is permitted.
Max Queries Per Hour
  • Available if you click Show Advanced Options. Maximum queries this user can perform in an hour. Default is 0 (unlimited).
Max Updates Per Hour
  • Available if you click Show Advanced Options. Maximum update operations this user can perform in an hour. Default is 0 (unlimited).
Max Connections Per Hour
  • Available if you click Show Advanced Options. Maximum connections allowed for this user in an hour. Default is 0 (unlimited).
Max User Connections
  • Available if you click Show Advanced Options. Maximum connections allowed for this user. Default is 0 (unlimited).
Requires SSL
  • Available if you click Show Advanced Options. Toggle on the option if this user must be authenticated using SSL. Default is false.
Privileges
  • Specify the privilege for this user. If the Privileges text box is active, it will list out all possible privileges on the server.
  • Specify the database or table name. It can be in *.*, {database_name}, {database_name}.* or {database_name}.{table_name} format.
Add Statement
  • Add another Privileges statement builder entry for this user.

Inactive Users

Shows all accounts across clusters that have not been used since the last server restart. The server must have been running for at least 1 hour before inactive accounts can be checked.

You can drop particular accounts by clicking the Drop User button to initiate the action.

Import Database Dumpfile

Upload the schema and the data files to the selected database node. Currently, only mysqldump is supported, and the dump file must not contain sub-directories. The following formats are supported (a sketch for producing a compatible dump file follows the table below):

  • dumpfile.sql

  • dumpfile.sql.gz

  • dumpfile.sql.bz2

Field Description
Import dumpfile on
  • Perform import operation on the selected database node.
Import dumpfile to database
  • Specify the target database.
Specify path to dumpfile
  • The dump file must be located on the ClusterControl server.
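
A compatible dump file can be produced with mysqldump and compressed into one of the accepted formats, for example (a sketch; the database name is a placeholder):

    # Create a plain dump file
    mysqldump -uroot -p mydb > dumpfile.sql

    # Or compress it on the fly
    mysqldump -uroot -p mydb | gzip > dumpfile.sql.gz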

Create Database

Creates a database in the cluster:

Field  Description
Database Name
  • Enter the name of the database to be created.
Create Database
  • Creates the database. ClusterControl will ensure the database exists on all nodes in the cluster.

Upgrades

Performs minor software upgrades for database and load balancer software, for example from MySQL 5.7.x to MySQL 5.7.y in a rolling upgrade fashion. ClusterControl will perform the software upgrade based on what is available on the package repository for the particular vendor.

For a master-slave replication setup (MySQL/MariaDB Replication), ClusterControl will only perform the upgrade on the slaves. Once the upgrade job on the slaves has completed successfully, you should promote an upgraded slave to be the new master and repeat the upgrade process once more for the former master (which has been demoted to a slave). To promote a slave, go to Nodes → pick an upgraded slave → Promote Slave.

Attention

Database major version upgrades are not supported by ClusterControl. A major version upgrade has to be performed manually, as it involves risky operations such as database package removal, configuration compatibility concerns, connector compatibility, etc.

Field  Description
Upgrade
  • Upgrades are performed online, one node at a time. The node will be stopped, the software will be updated, and the node will be started again. If a node fails to upgrade, the upgrade process is aborted and manual intervention is required to recover or reinstall the node.
  • If the database-related software is installed from the package repository, clicking on this will trigger an upgrade job using the corresponding package manager.
  • Upgrades should only be performed when there is as little traffic as possible on the cluster.
Check for New Packages
  • Triggers a job to check for any new versions of database-related packages. It is recommended to perform this operation before performing an actual upgrade.
  • You should see a list of packages under Available Packages; the bold lines are the ones that can be updated.
Select Nodes to Upgrade
  • Toggle all nodes that you want to upgrade.

Developer Studio

Provides functionality to create Advisors, Auto Tuners, or Mini Programs right within your web browser based on ClusterControl DSL. The DSL syntax is based on JavaScript, with extensions to provide access to ClusterControl’s internal data structures and functions. The DSL allows you to execute SQL statements, run shell commands/programs across all your cluster hosts, and retrieve results to be processed for advisors/alerts or any other actions. Developer Studio is a development environment to quickly create, edit, compile, run, test, debug, and schedule your JavaScript programs.

Advisors in ClusterControl are powerful constructs; they provide specific advice on how to address issues in areas such as performance, security, log management, configuration, storage space, etc. They can be anything from simple configuration advice, warning on thresholds or more complex rules for predictions, or even cluster-wide automation tasks based on the state of your servers or databases.

ClusterControl comes with a set of basic advisors that include rules and alerts on security settings, system checks (NUMA, Disk, CPU), queries, InnoDB, connections, Performance_Schema, configuration, NDB memory usage, and so on. The advisors are open source under MIT license, and publicly available at GitHub. Through the Developer Studio, it is easy to import new advisors as a JS bundle or export your own for others to try out.

Field  Description
New
  • Name – Specify the file name including folders if you need them. E.g. shared/helpers/cmon.js will create all appropriate folders if they don’t exist yet.

  • File content:
    • Empty file – Create a new empty file.
    • Template – Create a new file containing skeleton code for monitoring.
    • Generic MySQL Template – Create a new file containing a skeleton code for generic MySQL monitoring.
Import
  • Imports an advisor bundle (for example, a previously exported .tar.gz file) into Developer Studio.
Export
  • Exports the advisor’s directory to a .tar.gz format. The exported file can be imported to Developer Studio through the ClusterControl → Manage → Developer Studio → Import function.
Advisors
  • Opens the Advisor list page. See Advisors.
Save
  • Saves the file.
Move
  • Moves the file around between different subdirectories.
Remove
  • Removes the script.
Compile
  • Compiles the script.
Compile and run
  • Compiles and runs the script. The output appears under the Message, Graph, or Raw response tab underneath the editor.
  • The arrow next to the Compile and Run button allows you to change settings for the script, for example, to pass arguments to the main() function.
Schedule Advisor
  • Schedules the script as an advisor.

Tags

Note

This feature is introduced in ClusterControl v1.8.2.

Use tags to allow filtering and searching for clusters. Each cluster can have zero or more tags to help keep clusters organized. Note that special characters like spaces, tabs, and dollar signs are not supported. The created tags can be used to filter clusters on the Database Clusters list page by clicking on the magnifying glass icon on the top menu (next to the “Database Clusters” string).

To remove a tag, simply click on the x next to every created tag string.

Tags created here can also be used with the ClusterControl CLI using the --with-tags or --without-tags flag. See s9s-cluster.
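
For example (a sketch; the tag names are placeholders), listing clusters filtered by tag from the command line:

    # List only clusters carrying the "production" tag
    s9s cluster --list --long --with-tags="production"

    # List clusters that do not carry the "staging" tag
    s9s cluster --list --long --without-tags="staging"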
