Maintenance Release: June 2nd, 2023
- Address a startup issue with Redis Sentinel and the systemd service configuration (CCV2-690, CLUS-2213).
- Address an issue rebuilding an MSSQL Server replica when all nodes are in sync (CLUS-2247).
- Address an issue to prevent recreating existing users when the global ‘%’ wildcard pattern is used with ProxySQL (CLUS-1471).
- Address an issue with the cluster state being ‘Unknown’ when importing a Redis Sentinel cluster that has TLS/SSL enabled (CLUS-2220).
- Address an issue parsing MySQL user grants with Oracle MySQL 8 when using DB Users and Schemas (CLUS-2245).
Maintenance Release: May 19th, 2023
- Address a potential CMON memory leak when managing the Redis Sentinel cluster (CLUS-2155).
- Address an issue deploying PostgreSQL on Ubuntu 22.04 (CLUS-2174).
- Address an issue with switchovers for PostgreSQL; the proper use of CHECKPOINT has been updated (CLUS-2101).
- Address an issue where changes to the Prometheus configuration were not persisted (CLUS-2080).
- Frontend (UI):
- Address an issue retrieving data points for the dashboards, where pages could fail to load properly (CLUS-1901).
- PostgreSQL v10 is now deprecated as a deployment option.
- Building HAProxy from source is now deprecated and no longer available.
Maintenance Release: May 3rd, 2023
- Address an issue where NaN is shown with NDB Cluster on the overview page (CLUS-2122).
- Address an issue to add additional default options (single-transaction and quick) to mysqldump.
- Address an issue to include the man pages for CMON HA (CLUS-2136).
- Address an issue to silence ‘invalid configuration’ alarms for HAProxy when a backend node is unavailable (CLUS-2102).
- Frontend (UI):
- Address an issue to deploy HAProxy when switching from Database nodes to PgBouncer nodes in the deployment wizard (CLUS-2002).
Initial Release: April 20th, 2023
In this release, we have prioritized improving our high availability setup (CMON HA) for ClusterControl. It is an active/passive solution using the Raft protocol between a CMON controller ‘leader’ and ‘followers’.
A CMON controller is the primary process in ClusterControl and it’s the ‘control plane’ to manage and monitor the databases.
To use CMON HA, you need to do the following:
- Use a shared MySQL database cluster that all CMON controllers will use as the shared storage. Our recommendation is to use a MySQL Galera cluster which provides additional high availability of the database cluster.
- Install several ClusterControl nodes, at minimum 3 nodes for the quorum/voting process to work.
- These should be set up to connect to and share the same CMON database, and to have identical secrets in bootstrap.php. Add the IP address of the controller node's host to the RPC_BIND_ADDRESSES parameter in the /etc/default/cmon file on all CMON nodes.
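For illustration, the relevant line in /etc/default/cmon on each controller node might look like the following; the IP addresses below are placeholder examples, not values prescribed by this release:

```shell
# /etc/default/cmon (sketch; the IP addresses are placeholders)
# List the addresses the CMON RPC interface should bind to, including
# this controller host's own IP so the other controllers can reach it.
RPC_BIND_ADDRESSES="127.0.0.1,192.168.1.11"
```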
- Next, enable CMON HA using the s9s CLI on the selected ‘leader’ CMON controller node:
s9s controller --enable-cmon-ha
- Restart the cmon service on the other CMON controller ‘follower’ nodes:
systemctl restart cmon
- Verify that you have a leader and followers using the s9s CLI:
s9s controller --list --long
Finally, to set up the web application to handle ‘leader’ failures transparently, use a load balancer such as HAProxy that fails over to the ‘leader’ node, which serves the ‘main/active’ web application. This involves installing HAProxy and configuring it to health-check the CMON nodes, which can be done with xinetd and a custom script that calls the CMON RPC API interface.
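As a rough sketch of such a setup, an HAProxy frontend could route traffic to whichever controller currently serves the active web application. The IP addresses, ports, and the xinetd check port below are illustrative assumptions, not documented defaults:

```shell
# haproxy.cfg fragment (sketch; addresses and ports are assumptions)
listen clustercontrol-web
    bind *:9443
    mode tcp
    balance leastconn
    # Each CMON node runs a custom script under xinetd (here on port 9201)
    # that queries the CMON RPC API and returns HTTP 200 only on the leader,
    # so HAProxy sends traffic to the leader's web application.
    option httpchk
    default-server port 9201 inter 2s fall 2 rise 2
    server cc1 192.168.1.11:443 check
    server cc2 192.168.1.12:443 check
    server cc3 192.168.1.13:443 check
```

The health check runs on a separate port from the web application itself, so a node can be up but still be taken out of rotation when it is not the leader.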
Please see the online documentation on CMON HA for more detailed setup instructions.