
Changes in v1.9.4

Maintenance Release: November 30th, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5956
  • Controller:
    • Address an issue with stopping failed master when >1 writeable node is found (CLUS-1527).
    • Address an issue to restore an encrypted backup from another cluster (CLUS-1612).
    • Address an improvement to stop a replica if replication fails instead of keeping the replica running. A new CMON configuration parameter, replication_stage_failure_stop_node=true|false, can be used to set the behavior (CLUS-1532); see the example after this list.
    • Address an issue with HAProxy when a PgBouncer node is removed from the cluster so that HAProxy redirects to an existing node in the cluster, i.e., not the previously removed PgBouncer node (CLUS-1614).
    • Address an issue where we could end up with multiple primaries after failover (PostgreSQL) (CLUS-1602).
    • Address an issue where CMON keeps overwriting the mysql-server_version variable for ProxySQL. It now sets it once at deployment (CLUS-1638).
    • Address an issue with recovery jobs stuck in a running state (CLUS-1681).
    • Address an issue with email alerts where the date header was off by 2 hours (CLUS-1682).
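
For example, the behavior can be set in the cluster's CMON configuration file and activated with a controller restart. A minimal sketch, assuming the standard per-cluster configuration location (adjust the path and cluster id to your installation):

    # /etc/cmon.d/cmon_<cluster_id>.cnf (assumed location)
    # Stop the replica node when the replication stage fails instead of leaving it running
    replication_stage_failure_stop_node=true

After editing, restart the controller (for example, systemctl restart cmon) for the change to take effect.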

Maintenance Release: October 10th, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5826
  • Controller:
    • Address an issue installing MySQL 5.7 on Debian 11 (CLUS-1604).
    • Address an issue calculating DB growth where the number of tables to include was limited to 25. It is now set by a configurable variable, cmon_max_table_on_db_size_calc, which defaults to 100 (CLUS-1561); see the example after this list.
    • Address an issue where a deleted user's sessions were not automatically terminated (CCV2-542).
    • Address an issue using a local mirrored repository when adding new nodes. The vendor repo was used instead of the local repository (CLUS-1600).
    • Address an issue to include cluster_id in the cluster list result set (CCV2-516).
    • Address a segfault issue with MongoDB and DB growth calculation (CLUS-1516).
    • Address an improvement to pass controller-id and MySQL-DB to cmon at start (CLUS-1588).
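
For example, to include more tables in the DB growth calculation, the variable can be set in the cluster's CMON configuration file. A minimal sketch, assuming the standard per-cluster configuration location and an illustrative value:

    # /etc/cmon.d/cmon_<cluster_id>.cnf (assumed location)
    # Maximum number of tables included when calculating database growth (default 100)
    cmon_max_table_on_db_size_calc=200

Restart the cmon service after changing the value.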

Maintenance Release: September 27th, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5809
  • Controller:
    • Address an issue rebuilding a PostgreSQL replica node on CentOS if postgresql.conf is stored in the data directory (CLUS-1569).
    • Address an issue where an Invalid Security Configuration error alarm was sent even though cluster_ssl_enforce was set to false on MariaDB 10.2 (CLUS-1550).

Maintenance Release: August 31st, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5750
  • Controller:
    • Address an issue with unregistering HAProxy nodes (CLUS-1518).
    • Upgrade of the my.cnf template file for MariaDB Galera Cluster 10.6+.

Maintenance Release: August 29th, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5742
  • Controller:
    • Address an issue to still show FQDN for MS SQL Server when short names are used to set up Availability Groups (CCV2-502).
    • Address an issue with Oracle MySQL and Percona 8.x to use caching_sha2_password as the default authentication plugin.
    • Address an issue with ProxySQL when importing MySQL users with caching_sha2_password. A ‘plaintext password’ is required; otherwise, the user will be skipped during import (CLUS-1342).

Maintenance Release: August 23rd, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5730
  • Controller:
    • Address an issue to set the correct backend mysql-server_version in ProxySQL (CLUS-1473).
    • Address an issue where root@localhost was still used (testing the connection) when adding a new replication slave (CLUS-1455).
    • Address an issue for MaxScale where static changes to /etc/maxscale.cnf had no effect. load_persisted_configs=false is now the default, which gives static changes to /etc/maxscale.cnf priority over dynamic changes saved to /var/lib/maxscale/maxscale.cnf.d (CLUS-1476); see the sketch after this list.
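
A minimal sketch of the resulting MaxScale setting; load_persisted_configs is a MaxScale global parameter, and the [maxscale] section shown below is its usual place:

    # /etc/maxscale.cnf
    [maxscale]
    # Do not load runtime changes saved under /var/lib/maxscale/maxscale.cnf.d,
    # so static edits to this file take effect after a restart
    load_persisted_configs=false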

Maintenance Release: August 17th, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5720
    • clustercontrol-1.9.4-8407
  • Controller:
    • Address an issue where HAProxy admin user and password in the advanced settings were not properly set (CLUS-1409).
    • Address an issue with PostgreSQL primary failover failure. All cluster nodes are now granted access in all pg_hba.conf files at cluster deployment (CLUS-1459).
    • Address an issue to drop the cmon_host_log table since it is no longer in use.
    • Address an issue with MS SQL Server deployments using FQDN (CCV2-485, CCV2-473 Note: Additional improvements are in progress).
  • Frontend (UI):
    • Address an issue with MongoDB Replicaset deployment to support arbitrator nodes (CLUS-1458).
    • Address an issue to show a link to the T&C on the login page (CLUS-1464).

Maintenance Release: August 8th, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5695
    • clustercontrol-notifications-1.9.4-321
    • clustercontrol-1.9.4-8402
  • Controller:
    • Address an issue where ProxySQL deployment fails on MariaDB 10.6 (CLUS-1460).
    • Address an issue where DB user/account creation fails on MariaDB 10.6 (CLUS-1431, CLUS-1449, CLUS-1456).
    • Address an issue to include Controller/CMON and cluster name for notifications (CLUS-1416).
    • Address an issue removing keepalived nodes (CLUS-1462).
  • Frontend (UI):
    • Address an issue where the cluster load graph was stuck on loading (CLUS-1451).

Maintenance Release: July 29th, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5669
    • clustercontrol-1.9.4-8396
  • Controller:
    • Address an issue with backward compatibility on MongoDB where the role detection using the isMaster command was removed for older versions (CLUS-1446).
    • Address an issue where the deployment job for HAProxy with PostgreSQL got stuck (CLUS-1450).
    • Address an issue with ProxySQL synchronization where using a user other than ‘proxysql-monitor’ failed to migrate to the new instance (CLUS-1367).
    • Address further improvements to support ProxySQL users with sha2 passwords (CLUS-1342).
  • Frontend (UI):
    • Address an issue adding a MongoDB arbiter which failed because the incorrect type was used (regular router/mongos) (CLUS-1444).
    • Address an issue with the PgBouncer deploy/import page not showing up properly (CLUS-1437).

Maintenance Release: July 22nd, 2022

  • Build:
    • clustercontrol-controller-1.9.4-5638
  • Controller:
    • Address an issue where a stopped cluster changed state to failure instead of stopped (CLUS-1425).
    • Address an issue where the ProxySQL synchronization job failed but showed up as successful (CLUS-1426).
    • Address an issue where deleting a ProxySQL user only deleted it from the backend and not the frontend (CLUS-1428).
    • Address an issue where updating a ProxySQL frontend user entry did not update the backend entry (CLUS-1430).
    • Address an issue with minor upgrades of MariaDB where packages to upgrade failed to match (CLUS-1362).

Initial Release: July 18th, 2022

  • Build:
    • clustercontrol-1.9.4-8386
    • clustercontrol-controller-1.9.4-5624
    • clustercontrol-notifications-1.9.4-312
    • clustercontrol-ssh-1.9.4-127
    • clustercontrol-cloud-1.9.4-353
    • clustercontrol-clud-1.9.4-353

In our third release, we now support scaling out and in for Redis Sentinel, MS SQL Server 2019, and Elasticsearch clusters.

A shared filesystem snapshot repository can now be set up automatically with Elasticsearch at deployment time, and an AWS S3 compliant cloud snapshot repository can be used for backups instead of local storage.

We are continuing to add features to the new ClusterControl v2 web frontend and in this release, you are able to:

  • Deploy and Import:
    • New database: MongoDB Sharded cluster
    • Load balancers: ProxySQL, MaxScale, and Garbd
  • Automatic setup of a shared NFS filesystem for Elasticsearch at deployment
  • Cluster to cluster replication with MySQL and PostgreSQL clusters
  • AWS S3 compliant cloud snapshot repository for Elasticsearch
  • User profile and License management
  • ProxySQL and MaxScale Nodes pages

We still have some way to go; however, more of the core features needed to manage our full range of supported database technologies are becoming available in our new user interface.

Features

Scale In and Out

  • Add and remove nodes with:
    • Redis Sentinel
    • MS SQL Server 2019
    • Elasticsearch

Elasticsearch 7.x|8.x

  • Full-text search and analytics engine.
    • Deploy one node for test or development environments
    • Deploy three or more nodes for clustered deployments with master or data roles
    • Basic User Authentication with username and password
    • TLS/SSL API endpoint encryption
    • Backup Management with local storage repository
    • Scaling out or in master or data nodes
  • Current limitations:
    • No dashboards/performance charts
    • Only Ubuntu 20.04 and Red Hat/CentOS 8 are supported
    • Scheduled backups, configuration file management, and upgrades are not supported at this time.

Miscellaneous

Various other improvements and fixes have also been made and are available in the previous patch builds leading up to this release, such as the following (a configuration sketch follows the list):

  • Customize the time before failover for PostgreSQL and use systemctl instead of pg_ctl
    • replication_failover_wait_extra_sampling_rounds (default 0)
  • Customize mail subject for notification mails
    • email_subject_prefix
  • Start a node in bootstrap mode for Galera
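
A minimal sketch of how the two configuration parameters listed above could look in a cluster's CMON configuration file; the path is an assumption and the values are illustrative:

    # /etc/cmon.d/cmon_<cluster_id>.cnf (assumed location)
    # Wait this many extra sampling rounds before starting a failover (default 0)
    replication_failover_wait_extra_sampling_rounds=2
    # Prefix added to the subject line of notification emails
    email_subject_prefix=[ClusterControl]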

See the ClusterControl changelog for bug fixes and improvements prior to this release.

FAQs

Q: How can I start using Elasticsearch?

A: Please follow the instructions outlined in the Getting Started with ClusterControl v2 guide to install ClusterControl v2, and then use the Service Deployment wizard to deploy an Elasticsearch cluster.

Q: Can I separate the location of my Elasticsearch master and data nodes?

A: Yes, you can separate the master and data nodes to be on different hosts.

Q: What are the minimum requirements for a clustered Elasticsearch deployment?

A: You need:

  • At least three nodes in the master role
  • At least two nodes in the data role
  • At least two nodes of each role: master and data

Q: Can I rebuild an Elasticsearch node?

A: No, there is no need; the Elasticsearch cluster automatically partitions the dataset across all available data nodes. You can remove and add a data node if you need to, for example, to terminate and relaunch the VM the node runs on.

Q: Can I deploy other load balancers like ProxySQL with ClusterControl v2’s web application?

A: Not yet with this release, unfortunately. We are working on adding the remaining supported load balancers and connection pooling; expect them to be available in an upcoming release of the ClusterControl v2 user interface:

  • ProxySQL
  • MaxScale
  • Garbd
  • PgBouncer (coming soon)

ProxySQL support is currently limited, with more functionality becoming available in upcoming releases. For now, these node pages are ready:

  • Rules
  • Servers

Work In Progress (coming soon)

  • Monitor
  • Top Queries
  • Users
  • Variables
  • Scheduled scripts
  • Node performance
  • Process list