Hosts
Lists the hosts managed by ClusterControl for the specific cluster. This includes:
- ClusterControl node
- MySQL nodes (Galera, replication, group replication, and standalone)
- MySQL slave nodes (Galera and replication)
- garbd nodes (Galera)
- HAProxy nodes (Galera and MySQL Cluster)
- Keepalived nodes
- MaxScale nodes
- ProxySQL nodes
- MySQL API nodes (MySQL Cluster)
- Management nodes (MySQL Cluster)
- Data nodes (MySQL Cluster)
The list also shows each host's operating system, status, ping time, and the process ID of its main role.
Configurations
Manages the configuration files of your database, HAProxy, and Garbd nodes. For MySQL databases, changes to database variables can be persisted on one node or across a group of nodes at once; dynamic variables are applied directly without a restart.
ClusterControl does not keep a history of configuration changes, so there is no versioning at the moment; only one version exists at a time. It imports the latest configuration files every 30 minutes and overwrites them in the CMON database. This limitation will be addressed in an upcoming release, where ClusterControl shall support configuration versioning with a dynamic import interval.
| Field | Description |
| --- | --- |
| Save | |
| Import | |
| Change/Set Parameter | |
If you change a global system variable, the new value is remembered, but it is used only for new connections; sessions opened before the change keep the values they were initialized with.
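For illustration, a minimal sketch of the equivalent change made directly with the mysql client (the variable and value are arbitrary examples; ClusterControl performs this through its UI):

```bash
# Illustrative: apply a dynamic global variable on a node without a restart.
mysql -uroot -p -e "SET GLOBAL max_connections = 500;"
# Verify the new global value; sessions opened before the change keep the
# session values they were initialized with.
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"
```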
Base Template Files
All services configured by ClusterControl use a base configuration template available under `/usr/share/cmon/templates` on the ClusterControl node. You can modify these files directly to suit your deployment policy; however, this directory is replaced after a package upgrade.
To make sure your custom configuration template files persist across upgrades, store them under the `/etc/cmon/templates` directory (ClusterControl 1.6.2 and later). When ClusterControl loads a template file for deployment, files under `/etc/cmon/templates` always take priority over files under `/usr/share/cmon/templates`. If files with identical names exist in both directories, the one under `/etc/cmon/templates` is used.
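For example, a minimal sketch of persisting a customized Galera template across upgrades (the filename is taken from the table below):

```bash
# Keep a customized copy under /etc/cmon/templates so a package upgrade of
# ClusterControl does not overwrite it (ClusterControl 1.6.2 and later).
sudo mkdir -p /etc/cmon/templates
sudo cp /usr/share/cmon/templates/my.cnf.galera /etc/cmon/templates/
# Edit the copy; it now takes priority over the stock template.
sudo vi /etc/cmon/templates/my.cnf.galera
```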
The following are template files provided by ClusterControl related to MySQL/MariaDB:
| Filename | Description |
| --- | --- |
| config.ini.mc | MySQL Cluster configuration file (config.ini). |
| garbd.cnf | Galera arbitrator daemon (garbd) configuration file. |
| haproxy.cfg | HAProxy configuration template for Galera Cluster. |
| haproxy_rw_split.cfg | HAProxy configuration template for read-write splitting. |
| keepalived-1.2.7.conf | Legacy Keepalived configuration file (pre 1.2.7). This is deprecated. |
| keepalived.conf | Keepalived configuration file. |
| keepalived.init | Keepalived init script. |
| MaxScale_2.2_template.cnf | MaxScale 2.2 configuration template. |
| MaxScale_template.cnf | MaxScale configuration template. |
| my-cnf-backup-secrets.cnf | MySQL configuration template for the generated backup user. |
| my-root.cnf | Config file used by MySQL in order to perform log rotation. |
| my.cnf.80-pxc | MySQL configuration template for Percona XtraDB Cluster 8.0. |
| my.cnf.galera | MySQL configuration template for Galera Cluster. |
| my.cnf.grouprepl | MySQL configuration template for MySQL Group Replication. |
| my.cnf.gtid_replication | MySQL configuration template for MySQL Replication with GTID. |
| my.cnf.mdb10x-galera | MariaDB configuration template for MariaDB Galera 10 and later. |
| my.cnf.mdb10x-replication | MariaDB configuration template for MariaDB Replication 10 and later. |
| my.cnf.mdb55-galera | MariaDB configuration template for MariaDB Galera 5.5. |
| my.cnf.mysqlcluster | MySQL configuration template for MySQL Cluster. |
| my.cnf.ndb-8.0 | MySQL configuration template for MySQL Cluster 8.0. |
| my.cnf.pxc55 | MySQL configuration template for Percona XtraDB Cluster 5.5. |
| my.cnf.repl57 | MySQL configuration template for MySQL Replication 5.7. |
| my.cnf.repl80 | MySQL configuration template for MySQL Replication 8.0. |
| my.cnf.replication | MySQL configuration template for MySQL/MariaDB without MySQL's GTID. |
| my57.cnf.galera | MySQL configuration template for Galera Cluster on MySQL 5.7. |
| mysqlchk.galera | MySQL health check script template for Galera Cluster. |
| mysqlchk.mysql | MySQL health check script template for standalone MySQL server. |
| mysqlchk_rw_split.mysql | MySQL health check script template for MySQL Replication (master-slave). |
| mysqlchk_xinetd | Xinetd configuration template for MySQL health check. |
| mysqld.service.override | Systemd unit file template for MySQL service. |
| mysql_logrotate | Log rotation configuration template for MySQL. |
| proxysql_galera_checker.sh | ProxySQL health check script for Galera Cluster. |
| proxysql_logrotate | Log rotation configuration template for ProxySQL. |
| proxysql_template.cnf | ProxySQL configuration template. |
Starting from ClusterControl 1.9.7 (September 2023), ClusterControl GUI v2 is the default graphical user interface (GUI) for ClusterControl. Note that GUI v1 is considered a feature-freeze product with no future development; all new development happens in ClusterControl GUI v2. See User Guide (GUI v2).
Dynamic Variables
A number of configuration variables can be configured dynamically by ClusterControl. These variables are represented by an uppercase name enclosed in `@` characters, for example `@DATADIR@`. The following is the list of variables supported by ClusterControl for MySQL-based clusters:
| Variable | Description |
| --- | --- |
| @BASEDIR@ | Default is /usr. A value specified during cluster deployment takes precedence. |
| @DATADIR@ | Default is /var/lib/mysql. A value specified during cluster deployment takes precedence. |
| @MYSQL_PORT@ | Default is 3306. A value specified during cluster deployment takes precedence. |
| @BUFFER_POOL_SIZE@ | Automatically configured based on the host's RAM. |
| @LOG_FILE_SIZE@ | Automatically configured based on the host's RAM. |
| @LOG_BUFFER_SIZE@ | Automatically configured based on the host's RAM. |
| @BUFFER_POOL_INSTANCES@ | Automatically configured based on the host's CPU. |
| @SERVER_ID@ | Automatically generated based on the member's server-id. |
| @SKIP_NAME_RESOLVE@ | Automatically configured based on MySQL variables. |
| @MAX_CONNECTIONS@ | Automatically configured based on the host's RAM. |
| @ENABLE_PERF_SCHEMA@ | Default is disabled. A value specified during cluster deployment takes precedence. |
| @WSREP_PROVIDER@ | Automatically configured based on the Galera vendor. |
| @HOST@ | Automatically configured based on hostname/IP address. |
| @GCACHE_SIZE@ | Automatically configured based on disk space. |
| @SEGMENTID@ | Default is 0. A value specified during cluster deployment takes precedence. |
| @WSREP_CLUSTER_ADDRESS@ | Automatically configured based on members in the cluster. |
| @WSREP_SST_METHOD@ | Automatically configured based on the Galera vendor. |
| @BACKUP_USER@ | Default is backupuser. |
| @BACKUP_PASSWORD@ | Automatically generated and configured for backupuser. |
| @GARBD_OPTIONS@ | Automatically configured based on garbd options. |
| @READ_ONLY@ | Automatically configured based on replication role. |
| @SEMISYNC@ | Default is disabled. A value specified during cluster deployment takes precedence. |
| @NDB_CONNECTION_POOL@ | Automatically configured based on the host's CPU. |
| @NDB_CONNECTSTRING@ | Automatically configured based on members in the MySQL cluster. |
| @LOCAL_ADDRESS@ | Automatically configured based on the host's address. |
| @GROUP_NAME@ | Default is grouprepl. A value specified during cluster deployment takes precedence. |
| @PEERS@ | Automatically configured based on members in the Group Replication cluster. |
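To see which placeholders a given template actually uses, you can inspect it directly on the ClusterControl node; a sketch (output varies by template and ClusterControl version):

```bash
# List the unique @VARIABLE@ placeholders inside the stock Galera template.
grep -oE '@[A-Z_]+@' /usr/share/cmon/templates/my.cnf.galera | sort -u
```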
Load Balancer
Manages the deployment of load balancers (HAProxy, ProxySQL, and MaxScale) and virtual IP addresses (Keepalived). For Galera Cluster, it is also possible to add the Galera arbitrator daemon (Garbd) through this interface.
ProxySQL
Introduced in v1.4.0 and exclusive to MySQL-based clusters. By default, ClusterControl deploys ProxySQL in read/write split mode: it creates two host groups and sends read-only traffic to the slaves while writes go to the writable master. ProxySQL also works together with the automatic failover mechanism added in ClusterControl 1.4.0: once failover happens, ProxySQL detects the new writable master and routes writes to it. It all happens automatically, without any user intervention.
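As a rough sketch of the result, the two host groups can be inspected through ProxySQL's admin interface (port 6032 and the credentials are defaults/assumptions; use the values set during deployment):

```bash
# Inspect the backend servers and their hostgroups via the ProxySQL admin port.
mysql -h127.0.0.1 -P6032 -uadmin -p \
  -e "SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;"
```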
Deploy ProxySQL
Choose where to install
Specify the host on which you want to install ProxySQL. You can use an existing database server or another new host by specifying its hostname or IPv4 address.
| Field | Description |
| --- | --- |
| Server Address | |
| Port | |
| Select Version | |
ProxySQL Configuration
| Field | Description |
| --- | --- |
| Import Configuration | |
| Disable Firewall | |
| Disable AppArmor/SELinux | |
| Use Native Clustering | |
ProxySQL User Credentials
Two ProxySQL users are required: one for administration and another for monitoring. ClusterControl creates both during deployment. This section is greyed out if you have already enabled Native Clustering on any existing ProxySQL node, because the administration and monitoring users must be identical on all ProxySQL nodes.
| Field | Description |
| --- | --- |
| Administration User | |
| Administration Password | |
| Monitor User | |
| Monitor Password | |
Add database user
You can use an existing database user (created outside ProxySQL) or let ClusterControl create a new database user in this section. ProxySQL sits in the middle, between the application and the backend MySQL servers, so database users must be able to connect from the ProxySQL IP address.
| Field | Description |
| --- | --- |
| Use existing DB User | |
| Create new DB User | |
The user must exist on the database nodes and must be allowed to connect from the ProxySQL server.
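If you create the user outside ClusterControl, it could look like this (a sketch; the user, password, schema, and the ProxySQL address 10.0.0.10 are placeholders):

```bash
# Create the user and allow it to connect from the ProxySQL host's address.
mysql -uroot -p <<'SQL'
CREATE USER 'appuser'@'10.0.0.10' IDENTIFIED BY 'S3cret!';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'10.0.0.10';
SQL
```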
Select instances to balance
Choose which servers to include in the load-balancing set.
| Field | Description |
| --- | --- |
| Server Instance | |
| Include | |
| Max Replication Lag | |
| Max Connection | |
| Weight | |
Implicit Transactions
| Field | Description |
| --- | --- |
| Are you using implicit transactions? | |
Import ProxySQL
If you already have ProxySQL installed in your setup, you can easily import it into ClusterControl to benefit from monitoring and management of the instance.
Existing ProxySQL location
| Field | Description |
| --- | --- |
| Server Address | |
| Listening Port | |
ProxySQL Configuration
| Field | Description |
| --- | --- |
| Import Configuration | |
ProxySQL User Credentials
| Field | Description |
| --- | --- |
| Administration User | |
| Administration Password | |
HAProxy
Installs and configures an HAProxy instance. ClusterControl will automatically install and configure HAProxy, install the `mysqlchk` script (for MySQL health checks) on each of the database nodes as part of the `xinetd` service, and start the HAProxy service. Once the installation is complete, HAProxy listens for MySQL connections on the configured Listen Port (3307 by default).
This feature is idempotent; you can execute it as many times as you want and it will always reinstall everything as configured.
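After deployment, applications point at HAProxy instead of the database nodes directly; a minimal sketch (the host address and credentials are placeholders):

```bash
# Connect to MySQL through the HAProxy listen port (3307 by default).
mysql -h192.168.1.100 -P3307 -uappuser -p -e "SELECT @@hostname;"
```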
Deploy HAProxy
| Field | Description |
| --- | --- |
| Server Address | |
| Policy | |
| Listen Port (Read/Write) | |
| Install for read/write splitting (master-slave replication) | |
Installation Settings
| Field | Description |
| --- | --- |
| Overwrite Existing /usr/local/sbin/mysqlchk on targets | |
| Disable Firewall? | |
| Disable SELinux/AppArmor? | |
Advanced Settings
| Field | Description |
| --- | --- |
| Stats Socket | |
| Admin Port | |
| Admin User | |
| Admin Password | |
| Backend Name | |
| Timeout Server (seconds) | |
| Timeout Client (seconds) | |
| Max Connections Frontend | |
| Max Connections Backend/per instance | |
| xinetd allow connections from | |
Server instances in the load balancer
| Field | Description |
| --- | --- |
| Include | |
| Role | |
| Connection Address | |
Import HAProxy
| Field | Description |
| --- | --- |
| HAProxy Address | |
| cmdline | |
| Port | |
| Admin User | |
| Admin Password | |
| LB Name | |
| HAProxy Config | |
| Stats Socket | |
You need an admin user/password set in the HAProxy configuration; otherwise, you will not see any HAProxy stats.
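Besides the stats page, a quick way to verify the instance is through its stats socket; a sketch (the socket path is whatever you set in Stats Socket above, and socat must be installed):

```bash
# Query HAProxy's runtime statistics (CSV output) over the UNIX stats socket.
echo "show stat" | sudo socat stdio /var/run/haproxy.socket | head -5
```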
Keepalived
Keepalived requires two or more HAProxy, ProxySQL, or MaxScale instances in order to provide virtual IP address failover. By default, the virtual IP address is assigned to the 'Keepalived 1' instance. If that node goes down, the IP address automatically fails over to 'Keepalived 2'.
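To verify which node currently holds the virtual IP address, you can check the interface on each Keepalived host; a sketch (the VIP 192.168.1.200 and interface eth0 are placeholders):

```bash
# The node that owns the VIP shows it as an additional address on the NIC.
ip addr show dev eth0 | grep "192.168.1.200"
```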
Deploy Keepalived
| Field | Description |
| --- | --- |
| Load balancer type | |
| Keepalived 1 | |
| Add Keepalived Instance | |
| Remove Keepalived Instance | |
| Virtual IP | |
| Network Interface | |
Import Keepalived
| Field | Description |
| --- | --- |
| Keepalived 1 | |
| Add Keepalived Instance | |
| Remove Keepalived Instance | |
| Virtual IP | |
Garbd
Exclusive to Galera Cluster. The Galera arbitrator daemon (garbd) can be installed to avoid network partitioning or split-brain scenarios.
Deploy Garbd
| Field | Description |
| --- | --- |
| Server Address | |
| CmdLine | |
Deploying garbd on the server where ClusterControl itself is running is not supported, because the installation may remove existing MySQL packages managed by the software packaging tools. However, you can import an existing garbd instance (i.e., one you installed manually) even if it is running on the ClusterControl host.
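For reference, a typical garbd command line (the kind of value expected in the CmdLine field) might look like this; the addresses and group name are placeholders:

```bash
# Join the Galera group as an arbitrator (garbd replicates no data itself).
garbd --address "gcomm://10.0.0.11:4567,10.0.0.12:4567" \
      --group my_galera_cluster --daemon
```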
Import Garbd
| Field | Description |
| --- | --- |
| Garbd Address | |
| Port | |
| CmdLine | |
MaxScale
MaxScale is an intelligent proxy that forwards database statements to one or more database servers using complex rules, a semantic understanding of the database statements, and the roles of the various servers within the backend cluster of databases.
You can deploy a new MaxScale node or import an existing one as a load balancer and query router for your Galera Cluster, MySQL/MariaDB Replication, or MySQL Cluster. For a new deployment using ClusterControl, two production services are created by default:
- RW – Implements read-write split access.
- RR – Implements round-robin access.
To remove MaxScale, go to ClusterControl → Nodes → MaxScale node and click on the
Deploy MaxScale
Use this wizard to install MariaDB MaxScale as a MySQL/MariaDB load balancer.
| Field | Description |
| --- | --- |
| Server Address | |
| MaxScale Admin Username | |
| MaxScale Admin Password | |
| MaxScale MySQL Username | |
| MaxScale MySQL Password | |
| Threads | |
| CLI Port (Port for command line) | |
| RR Port (Port for round robin listener) | |
| RW Port (Port for read/write split listener) | |
| Debug Port (Port for debug information) | |
| Include | |
Import MaxScale
If you already have MaxScale installed in your setup, you can easily import it into ClusterControl to benefit from health monitoring and access to MaxAdmin – MaxScale’s CLI from the same interface you use to manage the database nodes. The only requirement is to have passwordless SSH configured between the ClusterControl node and the host where MaxScale is running.
| Field | Description |
| --- | --- |
| MaxScale Address | |
| CLI Port (Port for the Command Line Interface) | |
Processes
Manages external processes that are not part of the database system, e.g., a load balancer or an application server. ClusterControl actively monitors these processes and makes sure they are always up and running by executing the configured check expression.
| Field | Description |
| --- | --- |
| Host/Group | |
| Process Name | |
| Start Command | |
| Pidfile | |
| GREP Expression | |
| Remove | |
| Deactivate | |
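As an illustration of how the fields fit together, the check ClusterControl performs is conceptually similar to grepping the process list and restarting on a miss (all values below are hypothetical):

```bash
# If the GREP Expression finds no matching process, the Start Command is run
# to bring the process back, e.g.:
ps -ef | grep -v grep | grep "haproxy" \
  || /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D
```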
Schemas and Users
Manages database schemas and users’ privileges.
Users
Shows a summary of MySQL users and privileges for the cluster. All changes are automatically synced to all database nodes in the cluster. For a master-slave setup, ClusterControl creates the schema and user on the active master.
You can filter the list by username, hostname, database, or table in the text box. Click on Edit to update the existing user or Drop User to remove the existing user. Click on Create New User to open the user creation wizard:
| Field | Description |
| --- | --- |
| Username | |
| Password | |
| Hostname | |
| Max Queries Per Hour | |
| Max Updates Per Hour | |
| Max Connections Per Hour | |
| Max User Connections | |
| Requires SSL | |
| Privileges | |
| Add Statement | |
Inactive Users
Shows all accounts across clusters that have not been used since the last server restart. The server must have been running for at least one hour before inactive accounts can be checked.
You can drop a particular account by clicking the Drop User button.
Import Database Dumpfile
Upload the schema and the data files to the selected database node. Currently, only mysqldump output is supported, and the dump file must not contain sub-directories. The following formats are supported:
- dumpfile.sql
- dumpfile.sql.gz
- dumpfile.sql.bz2
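A compatible dump file can be produced with mysqldump, for example (the database name is a placeholder):

```bash
# Dump a single schema and compress it into one of the supported formats.
mysqldump -uroot -p --single-transaction appdb > dumpfile.sql
gzip dumpfile.sql    # produces dumpfile.sql.gz
```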
| Field | Description |
| --- | --- |
| Import dumpfile on | |
| Import dumpfile to database | |
| Specify path to dumpfile | |
Create Database
Creates a database in the cluster:
| Field | Description |
| --- | --- |
| Database Name | |
| Create Database | |
Upgrades
Performs minor software upgrades of database and load balancer software, for example from MySQL 5.7.x to MySQL 5.7.y, in a rolling-upgrade fashion. ClusterControl performs the upgrade using the packages available in the package repository for the particular vendor.
For a master-slave replication setup (MySQL/MariaDB Replication), ClusterControl only performs the upgrade on the slaves. Once the upgrade job on the slaves has completed successfully, promote one of the upgraded slaves to be the new master, then repeat the upgrade for the former master (now demoted to a slave). To promote a slave, go to Nodes → pick an upgraded slave → Promote Slave.
Database major version upgrades are not supported by ClusterControl. A major version upgrade has to be performed manually, as it involves risky operations such as database package removal, configuration compatibility concerns, and connector compatibility.
| Field | Description |
| --- | --- |
| Upgrade | |
| Check for New Packages | |
| Select Nodes to Upgrade | |
Developer Studio
Provides functionality to create Advisors, Auto Tuners, or Mini Programs right within your web browser based on ClusterControl DSL. The DSL syntax is based on JavaScript, with extensions to provide access to ClusterControl’s internal data structures and functions. The DSL allows you to execute SQL statements, run shell commands/programs across all your cluster hosts, and retrieve results to be processed for advisors/alerts or any other actions. Developer Studio is a development environment to quickly create, edit, compile, run, test, debug, and schedule your JavaScript programs.
Advisors in ClusterControl are powerful constructs; they provide specific advice on how to address issues in areas such as performance, security, log management, configuration, storage space, etc. They can be anything from simple configuration advice, warning on thresholds or more complex rules for predictions, or even cluster-wide automation tasks based on the state of your servers or databases.
ClusterControl comes with a set of basic advisors that include rules and alerts on security settings, system checks (NUMA, disk, CPU), queries, InnoDB, connections, Performance Schema, configuration, NDB memory usage, and so on. The advisors are open source under the MIT license and publicly available on GitHub. Through the Developer Studio, it is easy to import new advisors as a JS bundle or export your own for others to try out.
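You can also browse the stock advisor scripts locally; a sketch, assuming the repository location published by Severalnines at the time of writing:

```bash
# Fetch the open-source advisor bundle to inspect or adapt the scripts.
git clone https://github.com/severalnines/s9s-advisor-bundle.git
ls s9s-advisor-bundle
```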
| Field | Description |
| --- | --- |
| New | |
| Import | |
| Export | |
| Advisors | |
| Save | |
| Move | |
| Remove | |
| Compile | |
| Compile and run | |
| Schedule Advisor | |
Tags
This feature was introduced in ClusterControl v1.8.2.
Use tags to allow filtering and searching for clusters. Each cluster can have zero or more tags to help keep clusters organized. Note that special characters such as spaces, tabs, and dollar signs are not supported. Created tags can be used to filter the Database Cluster list page by clicking on the magnifying-glass icon in the top menu (next to the “Database Clusters” string).
To remove a tag, simply click on the
Tags created here can also be used with the ClusterControl CLI using the `--with-tags` or `--without-tags` flags. See s9s-cluster.
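For example (a sketch, assuming the s9s CLI is installed and configured, with a placeholder tag name):

```bash
# List only clusters carrying a given tag.
s9s cluster --list --long --with-tags="production"
# List clusters that do NOT carry the tag.
s9s cluster --list --long --without-tags="production"
```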