Maintenance Release: August 11th, 2023
- Build:
- clustercontrol-controller-1.9.6-6467
- Controller:
- Address an issue where backup records taken with PgBackRest were not removed according to the retention period (CLUS-2511).
- Address an issue deploying MaxScale on MariaDB (CLUS-2441).
- Address an issue where partial backups with Xtrabackup created full backups (CLUS-2449).
- Address an issue with backup retention where backups were not removed (CLUS-2503).
- Address an issue with MongoDB where agent-based monitoring stopped working after enabling SSL (CLUS-2428).
Maintenance Release: August 7th, 2023
- Build:
- clustercontrol-1.9.6-8624
- Frontend (UI):
- Address an issue with the MariaDB deployment wizard showing a v10.11 option; v10.11 will be supported in the upcoming v1.9.7 (CLUS-2248).
Maintenance Release: July 31st, 2023
- Build:
- clustercontrol-controller-1.9.6-6447
- clustercontrol-1.9.6-8620
- Controller:
- Address an issue deploying PgBouncer when trying to get the default socket directory from PostgreSQL. The new socket directory is `/tmp` (CLUS-2245).
- Address an issue where a PITR restore hung with pg_basebackup (CLUS-2374). A new CMON configuration option, `postgresql_wait_recovery_on_restoration_timeout`, can be used to change the default wait time of 30 minutes; see the sketch after this list.
- Address an issue with the Redis Sentinel cluster status when a backup failed to be restored (CLUS-2429).
- Address an issue with MongoDB and PBM where a failed backup could delete previous backups (CLUS-2418).
- Address an issue where the MongoDB Prometheus cmon exporter user’s password was logged in plain text (CLUS-2392).
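For illustration, a minimal sketch of raising this timeout in the per-cluster CMON configuration; the cluster id, value, and unit below are assumptions to verify against the CMON documentation:
# /etc/cmon.d/cmon_1.cnf (hypothetical cluster id 1; path follows the usual per-cluster CMON convention)
# Raise the PITR recovery wait beyond the 30-minute default.
# The unit of the value is an assumption; check the CMON documentation.
postgresql_wait_recovery_on_restoration_timeout=3600
Restart the cmon service afterwards so the controller picks up the change.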
- Frontend (UI):
- Address a minor issue with the user registration form (CLUS-2480).
- Address an issue with the MariaDB deployment wizard missing the v10.11 option (CLUS-2248).
Maintenance Release: July 3rd, 2023
- Build:
- clustercontrol-controller-1.9.6-6408
- clustercontrol-1.9.6-8611
- Controller:
- Address various issues with Redis Sentinel clusters:
- Address an issue where the ‘internal IP’ was ignored and the external IP was used instead (CLUS-2395).
- Address an issue when importing ProxySQL users that have an existing user with a `%` wildcard hostname (reverting the fix for CLUS-2270).
- Address an issue with PITR using mysqldump with Percona PXC Galera 5.7 (CLUS-2401).
- Address an issue where the MongoDB Prometheus cmon exporter user’s password was logged in plain text (CLUS-2392).
- Frontend (UI):
- Address an issue where the ‘monitor’ username and password were missing when importing ProxySQL in CCv1 (CLUS-2260).
- Address an issue with UI performance when loading the cluster list when there is a substantial number of clusters (CLUS-2491).
Note: add the following to the `(webroot)/clustercontrol/bootstrap.php` file: `define('SKIP_CLUSTER_CHECK_ACCESS', true);`
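For illustration, the line can be appended from a shell as follows; the webroot path is an assumption (the common Apache default), and the file is assumed to have no closing ?> tag (otherwise add the line above it):
# Hypothetical default webroot; substitute your own (webroot) path.
echo "define('SKIP_CLUSTER_CHECK_ACCESS', true);" | sudo tee -a /var/www/html/clustercontrol/bootstrap.php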
Maintenance Release: June 26th, 2023
- Build:
- clustercontrol-1.9.6-8607
- Frontend (UI):
- Address an issue with the data retention size (`storage.tsdb.retention.size`) missing with Prometheus in CCv1 (CLUS-2223).
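For context, this setting corresponds to Prometheus' size-based retention flag (available since Prometheus 2.7); the value and config path below are only illustrative:
# Size-based retention; 10GB and the config path are placeholders.
prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.retention.size=10GB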
Maintenance Release: June 19th, 2023
- Build:
- clustercontrol-controller-1.9.6-6382
- Controller:
- Address an issue when deploying a MongoDB Replica Set on GCP (CLUS-2218).
Maintenance Release: June 11th, 2023
- Build:
- clustercontrol-1.9.6-8601
- Frontend (UI):
- Address an issue with how cluster-to-cluster replication clusters are sorted by Active vs. Read-only state (CLUS-1881).
- Address an issue with offline installation on Ubuntu Jammy using the setup-cc.sh script.
Maintenance Release: June 2nd, 2023
- Build:
- clustercontrol-controller-1.9.6-6358
- Controller:
- Address a startup issue with Redis Sentinel and the systemd service configuration (CCV2-690, CLUS-2213).
- Address an issue rebuilding an MSSQL Server replica when all nodes are in sync (CLUS-2247).
- Address an issue to prevent recreating existing users when the global `%` wildcard pattern is used with ProxySQL (CLUS-1471).
- Address an issue with the cluster state being ‘Unknown’ when importing a Redis Sentinel cluster that has TLS/SSL enabled (CLUS-2220).
- Address an issue parsing MySQL user grants with Oracle MySQL 8 when using DB Users and Schemas (CLUS-2245).
Maintenance Release: May 19th, 2023
- Build:
- clustercontrol-controller-1.9.6-6328
- clustercontrol-1.9.6-8578
- Controller:
- Address a potential CMON memory leak when managing the Redis Sentinel cluster (CLUS-2155).
- Address an issue deploying PostgreSQL on Ubuntu 22.04 (CLUS-2174).
- Address an issue with switchovers for PostgreSQL. The proper use of `CHECKPOINT` has been updated (CLUS-2101).
- Address an issue where changes to the Prometheus configuration were not persisted (CLUS-2080).
- Frontend (UI):
- Address an issue when retrieving data points for the dashboards, improving cases where pages failed to load properly (CLUS-1901).
- PostgreSQL v10 is now deprecated as a deployment option.
- Building HAProxy from source is now deprecated and no longer available.
Maintenance Release: May 3rd, 2023
- Build:
- clustercontrol-controller-1.9.6-6307
- clustercontrol-1.9.6-8564
- Controller:
- Address an issue where NaN is shown with NDB Cluster on the overview page (CLUS-2122).
- Address an issue to add additional default options (`--single-transaction` and `--quick`) to mysqldump (CLUS-1979); see the example after this list.
- Address an issue to include the man pages for CMON HA (CLUS-2136).
- Address an issue to silence ‘invalid configuration’ alarms for HAProxy when a backend node is unavailable (CLUS-2102).
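For context, these are the standard mysqldump flags the fix adds by default, shown in a standalone invocation with a placeholder database name:
# Takes a consistent snapshot of InnoDB tables without locking them and
# streams rows instead of buffering the whole result set in memory.
# 'mydb' is a placeholder database name.
mysqldump --single-transaction --quick mydb > mydb.sql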
- Frontend (UI):
- Address an issue to deploy HAProxy when switching from Database nodes to PgBouncer nodes in the deployment wizard (CLUS-2002).
Initial Release: April 20th, 2023
- Build:
- clustercontrol-1.9.6-8556
- clustercontrol-controller-1.9.6-6288
- clustercontrol-cloud-1.9.6-389
- clustercontrol-notifications-1.9.6-330
- clustercontrol-ssh-1.9.6-145
In this release, we have prioritized improving our high availability setup (CMON HA) for ClusterControl. It is an active/passive solution using the Raft protocol between a CMON controller ‘leader’ and ‘followers’.
A CMON controller is the primary process in ClusterControl and it’s the ‘control plane’ to manage and monitor the databases.
To use CMON HA, you need to do the following:
- Use a shared MySQL database cluster that all CMON controllers will use as the shared storage. Our recommendation is to use a MySQL Galera cluster which provides additional high availability of the database cluster.
- Install several ClusterControl nodes, at minimum 3 nodes for the quorum/voting process to work.
- These should be set up to connect to and share the same CMON database, and have identical `rpc_key` (`cmon.cnf`) and `RPC_TOKEN` (`bootstrap.php`) secrets. Add the IP of the Controller node's host to the `RPC_BIND_ADDRESSES` parameter in the `/etc/default/cmon` file on all CMON nodes (see the sanity-check sketch after this list).
- Next, enable CMON HA on the selected ‘leader’ CMON controller node using the s9s CLI:
s9s controller --enable-cmon-ha
- Restart the other CMON controller ‘follower’ nodes:
systemctl restart cmon
- Verify that you have a leader and followers using the s9s CLI:
s9s controller --list --long
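As a quick sanity check of the configuration step above, the shared secrets and bind addresses can be compared across nodes; the webroot path below is an assumption (adjust it to your installation):
# All nodes must show the same rpc_key/RPC_TOKEN secret, and
# RPC_BIND_ADDRESSES must include the node's own IP,
# e.g. RPC_BIND_ADDRESSES="127.0.0.1,192.168.1.11".
grep rpc_key /etc/cmon.cnf
grep RPC_TOKEN /var/www/html/clustercontrol/bootstrap.php
grep RPC_BIND_ADDRESSES /etc/default/cmon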
Finally, to make the web application handle ‘leader’ failures transparently, use a load balancer such as HAProxy that fails over to the ‘leader’ node, which serves the ‘main/active’ web application. This involves installing HAProxy and configuring it to health-check the CMON nodes, which can be achieved with xinetd and a custom script that calls the CMON RPC API.
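As an illustration of that wiring (not the documented setup), a rough sketch; every service name, port, and address here is an assumption:
# 1) On each CMON node, an xinetd service (e.g. /etc/xinetd.d/cmon-ha-check,
#    listening on port 9201) runs a custom script that calls the CMON RPC API
#    and answers HTTP 200 only when the local controller is the leader.
# 2) HAProxy then health-checks that port and routes the web UI to the leader:
#    backend cc_web
#        option httpchk
#        server cc1 192.168.1.11:443 check port 9201
#        server cc2 192.168.1.12:443 check port 9201
#        server cc3 192.168.1.13:443 check port 9201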
Please see the online documentation on CMON HA for more detailed setup instructions.