Maintenance Release: April 8th, 2020
- Build:
- clustercontrol-1.7.5-6810
- clustercontrol-notifications-1.7.5-249
- clustercontrol-cloud-1.7.5-239
- Frontend (UI):
- Opsgenie Integration: A fix to allow the user to specify a region when setting up the integration.
- Notifications:
- Opsgenie Integration: Fixed an issue resulting in the error `Failed to parse request body: parse error: expected string offset 11 of teams`.
- Opsgenie Integration: Fixed an issue handling the region.
- Improved and fixed a bug with `http_proxy` handling. An `http_proxy`/`https_proxy` can now be specified in `/etc/proxy.env` or `/etc/environment` (see the sketch at the end of this release's notes).
- Cloud:
- Improved and fixed a bug with `http_proxy` handling. An `http_proxy`/`https_proxy` can now be specified in `/etc/proxy.env` or `/etc/environment`, as sketched below.
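For reference, a minimal sketch of the proxy file both services can pick up; the proxy host and port are placeholders:

```
# /etc/proxy.env (or /etc/environment)
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
```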
Maintenance Release: April 7th, 2020
- Build:
- clustercontrol-controller-1.7.5-3844
- Controller:
- HAProxy: Using ports 5433 (read/write) and 5434 (read-only) by default for PostgreSQL (see the sketch after this release's notes).
- HAProxy: PostgreSQL – Read/write splitting was not set up when installing HAProxy from the S9s CLI.
- HAProxy: Installing HAProxy attempted to use the Backup Verification Server too.
- PostgreSQL: Fixed a never-ending ‘Failover to a New Master’ job, plus a cluster status bugfix (the cluster must be in the Cluster Failed state when there is no writable master).
- PostgreSQL: Dashboards: Failed to deploy agents in some cases on the Data nodes.
- PostgreSQL: Import `recovery.conf`/`postgresql.auto.conf`; they can now be edited in the UI.
- PostgreSQL: `pg_hba.conf` is now editable in the UI.
- PostgreSQL: `pg_basebackup` restore: first undo any previous PITR-related options before restoring.
- PostgreSQL: Fixed ‘Failed to Start Node’ for PostgreSQL.
- PostgreSQL: Fix `pg_ctl` status return value and output handling.
- PostgreSQL: Rebuild replication slave did not reset `restore_command`.
- Percona Server 8.0: Verification of partial backup failed.
- ProxySQL: Could not edit backend server properties in ProxySQL for Galera.
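For orientation, a minimal haproxy.cfg sketch of the port layout described above. This is not the configuration ClusterControl generates; the addresses, backend names, and health-check user are placeholders:

```
listen postgres_rw
    bind *:5433                  # read/write port: routed to the master only
    mode tcp
    option pgsql-check user haproxy_check
    server pg-master 10.0.0.10:5432 check

listen postgres_ro
    bind *:5434                  # read-only port: balanced across all nodes
    mode tcp
    balance leastconn
    option pgsql-check user haproxy_check
    server pg-master 10.0.0.10:5432 check
    server pg-slave1 10.0.0.11:5432 check
```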
Maintenance Release: April 1st, 2020
- Build:
- clustercontrol-controller-1.7.5-3828
- clustercontrol-notifications-1.7.5-243
- Notifications:
- Fixed an issue with the Opsgenie integration that produced the error `Failed to parse request body: parse error: expected string offset 11 of teams`.
- cmon-events did not read MySQL connection details from `/etc/cmon-events.cnf`.
- Password handling: Passwords containing special characters were rejected by the cmon-events service.
- Remember to restart the service after the upgrade: `service cmon-events restart` or `systemctl restart cmon-events`.
- Controller:
- Spelling fix for cluster action Schedule and Disable Maintenance Mode.
- PostgreSQL: Verify Backup: Recreate the `datadir` and config file if missing on the Backup Verification Server.
- PostgreSQL: Fixed ‘Failed to Start Node’ for PostgreSQL.
- PostgreSQL: PITR with `pg_basebackup` failed because `standby_mode` was ON, preventing the node from leaving recovery.
- PostgreSQL: Hide passwords from PostgreSQL logs.
- Error Reporting: Fixed a number of small issues.
Maintenance Release: March 31st, 2020
- Build:
- clustercontrol-1.7.5-6794
- Frontend (UI):
- Spelling fix for cluster action Schedule and Disable Maintenance Mode.
Maintenance Release: March 30th, 2020
- Build:
- clustercontrol-controller-1.7.5-3819
- clustercontrol-1.7.5-6791
- Frontend (UI):
- PostgreSQL: Point-in-time recovery (PITR) – fixes when selecting stop time and timezone.
- PostgreSQL: Fixed and improved Restore Backup to show the correct PITR options for `pg_basebackup`.
- Cloud Deploy: Added missing references to our online documentation on how to create/add cloud credentials.
- Sync Clusters: Sync the UI view of clusters with the controller.
- Controller:
- PostgreSQL: Recovery of slaves will not commence if the master is down.
- PostgreSQL: Verify Backup now works when Install Software is enabled and Terminate Server is disabled.
- PostgreSQL: Promote failed when WAL replay is paused.
- PostgreSQL: Point-in-time recovery (PITR) fixes for `pg_basebackup`.
- Notifications: Alarms raised by the controller are now sent only once to each recipient.
- Limitations:
- PostgreSQL PITR:
- If no writes have been made after the backup, then PITR may fail.
- Specifying a stop time too far in the future may also cause issues.
- We recommend using `pg_basebackup` if you intend to use PITR.
- PostgreSQL Backups (`pgbackrest` & `pg_basebackup`): `pgbackrest` uses an `archive_command` that is not compatible with `pg_basebackup`. This means, for example, that a `pg_basebackup` backup cannot be restored using PITR on a PostgreSQL server whose `archive_command` is configured for `pgbackrest` (see the sketch after this list).
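To make the incompatibility concrete, here is a sketch of the two `archive_command` styles in postgresql.conf; the stanza name and archive directory are placeholders:

```
# WAL archiving managed by pgbackrest: segments go into the pgbackrest
# repository and cannot be replayed for PITR of a pg_basebackup backup.
archive_command = 'pgbackrest --stanza=mycluster archive-push %p'

# Plain file-copy archiving that pg_basebackup PITR can replay from.
archive_command = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f'
```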
Maintenance Release: March 23rd, 2020
- Build:
- clustercontrol-controller-1.7.5-3797
- clustercontrol-1.7.5-6757
- Frontend (UI):
- Verify Backup: The temporary directory field was mandatory to specify but not used at all.
- Prometheus: Graph for disk usage is incomplete.
- Prometheus: Not possible to change Prometheus deployment options when deployment failed.
- PostgreSQL: Point-in-time recovery (PITR) depends on the PostgreSQL `archive_command`. An archive command suitable for PgBackRest does not work for `pg_basebackup`. PITR options are now only shown for a backup method if the underlying archive command supports it.
- PostgreSQL: Fixed timezone transformation for PITR.
- Query Monitor: Fixed bug saving settings.
- Overview/Node Graphs: In some circumstances the date range could be the same for From Date and To Date, resulting in zero data points and no graph displayed.
- Audit Log: The timestamp in the `auth.log` file was off by 1h (it defaults to UTC).
- Error Reporting: The wrong Error Report Default Destination was shown.
- Controller (bugs fixed):
- ProxySQL: Version is not updated in Topology view.
- PostgreSQL: The PostgreSQL master node fails if you enable WAL archiving after promoting it.
- PostgreSQL: Verifying a `pg_basebackup` backup (and potentially other PostgreSQL backup methods) fails.
- PostgreSQL: Promoting a slave when a master cannot be determined or reached.
- PostgreSQL: Fixed an issue with `pg_basebackup` and multiple tablespaces (NOTE: encryption isn’t supported for multiple tablespaces).
- PostgreSQL: PgBackRest with Auto Select backup host fails.
- PostgreSQL: Restoring a PgBackRest backup on PostgreSQL 12 failed.
- PostgreSQL: Make sure the recovery signal file is not present when enabling WAL log archiving.
- PostgreSQL: Fall back to the server version from the configuration when the information is not available in the host instance.
- PostgreSQL: Verify the WAL archive directory for log files before performing PITR.
- Query Monitor: Disabling the Query Monitor by setting `enable_query_monitor=-1` in `/etc/cmon.d/cmon_X.cnf` was not working (see the snippet after this release's notes).
- Galera: Force stop on the node does not prevent further auto-recovery jobs.
- Galera: Node recover job fails but is shown in green.
- Galera: Backups were not working for non-synced nodes in a Galera Cluster. mysqldump can now be taken on non-synced nodes, whereas the xtrabackup/mariabackup tools prevent this.
- MariaDB: MariaDB 10.3/10.4 promote slave action fails.
- Repository Manager: Updated and added missing versions and removed some deprecated versions.
- Controller (behavior change):
- Backup Verification Server: Applies to MySQL based systems only (PostgreSQL coming soon). It is now possible to reuse an up-and-running Backup Verification Server (BVS). Thus, a BVS does not need to be shut down before verifying the backup.
- Host Discovery: A new way to execute host discovery, with logging to `/var/log/cmon_discovery*.log`.
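For reference, disabling the Query Monitor for a given cluster then looks like this (X is the cluster id; restart the cmon service afterwards):

```
# /etc/cmon.d/cmon_X.cnf
enable_query_monitor=-1
```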
Maintenance Release: March 4th, 2020
- Build:
- clustercontrol-1.7.5-6697
- Frontend (UI):
- Auth logging: Added TZ support. The server’s TZ is used by default, but another TZ can be set in `/var/www/html/clustercontrol/bootstrap.php` (see the sketch below).
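The exact setting inside bootstrap.php is not shown in these notes; assuming the standard PHP mechanism, a timezone override would look like:

```php
<?php
// Assumption: bootstrap.php sets the PHP default timezone used by the UI.
date_default_timezone_set('Europe/Stockholm');
```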
Maintenance Release: March 3rd, 2020
- Build:
- clustercontrol-controller-1.7.5-3735
- clustercontrol-1.7.5-6695
- Frontend (UI):
- Auth logging: Logins/logouts and failed login attempts are stored in `/var/www/html/clustercontrol/app/tmp/logs/auth.log`.
- Controller:
- PostgreSQL: Fixed a bug in Database Growth.
Maintenance Release: March 1st, 2020
- Build:
- clustercontrol-controller-1.7.5-3730
- clustercontrol-1.7.5-6685
- Frontend (UI):
- Cloud Deployment Wizard: Updated to the latest supported vendor versions.
- PostgreSQL: Fixed an issue showing `replay_location` in, e.g., the Topology View.
- Controller:
- MongoDB: The wrong template was used for MongoDB and Percona MongoDB 4.2.
- Query Monitor (MySQL): The `datadir` and `slow_query_log_file` variables were read too often.
- TimescaleDB: Rebuild slave fails on an installed but not registered TimescaleDB.
- MySQL/Galera: Upgrade MySQL/Galera packages in one batch instead of installing/upgrading them one-by-one.
- HAProxy: Include the latest HAProxy sample in the error report.
- General: `staging_dir` from `cmon.cnf` is not respected.
- Percona Server 8.0: Can’t deploy ProxySQL on a separate non-DB node in Percona Server 8.0.
Maintenance Release: February 9th, 2020
- Build:
- clustercontrol-controller-1.7.5-3679
- clustercontrol-1.7.5-6646
- Frontend (UI):
- Create Slave Cluster action not working immediately after deploying a cluster.
- MaxScale: Make MaxScale available for Keepalived.
- Load balancers: Added options to avoid disabling SELinux and firewall.
- Cluster List: Fixed sorting of clusters.
- Controller:
- ProxySQL: Fixed a bug deploying ProxySQL on a separate node in a Percona Server 8.0 Cluster.
- Prometheus/Dashboards: Fixed a DNS resolution issue so that the `mysqld_exporter`, with the property `db_exporter_use_nonlocal_address`, properly handles the `skip_name_resolve` flag.
- PostgreSQL: Fixed an issue where the controller always tried to connect to a ‘postgres’ DB even if no database was specified.
Maintenance Release: January 20th, 2020
- Build:
- clustercontrol-controller-1.7.5-3638
- clustercontrol-1.7.5-6619
- Frontend (UI):
- MongoDB: Added 4.0 and 4.2 versions for both mongodb.org and percona vendor in the UI.
- MySQL/Backup: Added ‘qpress’ compression option.
- Backups: The netcat/socat port is now specified in Global Settings.
- Backups: Added a check so that the Failover host cannot be set to the same value as the primary backup host.
- Cluster List: Fixed a sorting order issue.
- Controller:
- MySQL/Backup: Auto-install ‘qpress’ during restore/verify when required.
- MySQL/Replication: Fixed a segfault that could happen during master failover in MySQL 8.0.
- MySQL: Disable unsupported variables for 5.5.
- ProxySQL: Avoid executing SQL init commands on the connection (crashing bug in ProxySQL 1.4.10, fixed in ProxySQL 1.4.13).
- MongoDB 4.2: Fixed an issue importing a cluster caused by newlines in the key file.
- MongoDB: Fixed a missing cloud badge on MongoDB clusters created in the cloud.
- PostgreSQL: Improved free disk space detection before rebuilding a slave.
- PostgreSQL: Create a cluster in the cloud failed because no PostgreSQL version was specified.
- PostgreSQL: Auto-rebuilding failed replication slaves now resorts to the full node rebuild strategy instead of `pg_rewind`, as the latter is known to fail in a number of scenarios.
- Dashboards/Prometheus exporters: New configuration option: `db_exporter_use_nonlocal_address` (see the snippet below).
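A minimal sketch of enabling the new option; it is an assumption that it is set per cluster like other cmon options and that a value of 1 enables it:

```
# /etc/cmon.d/cmon_X.cnf  (restart cmon afterwards)
db_exporter_use_nonlocal_address=1
```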
Maintenance Release: January 7th, 2020
- Build:
- clustercontrol-controller-1.7.5-3616
- clustercontrol-1.7.5-6604
- Frontend (UI):
- Cluster Overview (MySQL based clusters): Fixed an issue with the Query Outliers which relied on deprecated code.
- Node Actions: The Stop Node action is always visible so it is always possible to stop a node.
- Controller:
- Notifications: Fixed an error with certain SMTP servers: `550 5.6.11 SMTPSEND.BareLinefeedsAreIllegal`.
- PostgreSQL 9.7 with TimescaleDB: Add Node fails on CentOS 7 and CentOS 8.
Initial Release: December 18th, 2019
- Build:
- clustercontrol-1.7.5-6599
- clustercontrol-controller-1.7.5-3601
- clustercontrol-notifications-1.7.5-201
- clustercontrol-ssh-1.7.5-88
- clustercontrol-cloud-1.7.5-225
In this release we are introducing cluster-wide maintenance mode, taking snapshots of the MySQL database status and processlist before a cluster failure, and support for new versions of PostgreSQL, MongoDB, CentOS, and Debian.
We have previously supported maintenance mode for one node at a time; however, more often than not you want to put all cluster nodes into maintenance. Cluster-wide maintenance mode enables you to set a maintenance period for all the database nodes in a cluster at once.
To assist in finding the root cause of failed database nodes, we now take snapshots of the MySQL status and processes, showing you the state of the database node around the time it failed. Cluster incidents can then be inspected in an operational report or from the s9s command-line tool.
Finally, we have added support for CentOS 8 and Debian 10, and for deploying/importing MongoDB v4.2 and Percona MongoDB v4.0.
Feature Details
- Cluster Wide Maintenance
- Enable/disable cluster-wide maintenance mode with a cron-based schedule (see the s9s sketch at the end of these notes).
- Enable/disable recurring jobs such as cluster or node recovery with automatic maintenance mode.
- MySQL Freeze Frame (BETA)
- Snapshot MySQL status before cluster failure.
- Snapshot MySQL process list before cluster failure (coming soon).
- Inspect cluster incidents in operational reports or from the s9s command-line tool.
- Updated Version Support
- CentOS 8 and Debian 10 support.
- PostgreSQL 12 support.
- MongoDB 4.2 and Percona MongoDB v4.0 support.
- Misc
- Synchronize time range selection between the Overview and Node pages.
- Improvements to node status updates, making them more accurate and less delayed.
- Enable/disable Cluster and Node recovery are now regular CMON jobs.
- Topology view for cluster-to-cluster replication.
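As a rough illustration of scheduling cluster-wide maintenance from the s9s CLI: a sketch only, with flags modeled on the existing per-node `s9s maintenance` syntax; the exact options may differ in your s9s version.

```
# Assumed flags: --cluster-id to target the whole cluster, --start/--end
# for the window, --reason for the audit trail.
s9s maintenance --create \
    --cluster-id=1 \
    --start="2019-12-20T00:00:00" \
    --end="2019-12-20T04:00:00" \
    --reason="Quarterly OS patching"
```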