Hosts
Lists the hosts managed by ClusterControl for this specific cluster. This includes:
- ClusterControl node
- PostgreSQL nodes (standalone)
- PostgreSQL master nodes (replication)
- PostgreSQL slave nodes (replication)
To remove a host, select the host and click on the Remove button. We strongly recommend against removing a node from this page while it still holds a role inside ClusterControl.
Configurations
Manages the configuration files of your database and HAProxy nodes. Within the pg_hba.conf and postgresql.conf files, some parameters carry the comment `# (change requires restart)`. Note that ClusterControl performs neither a reload nor a restart after modifying a configuration file; you have to schedule a reload/restart operation yourself to load the changes into the server runtime. If you would like to create a user with an automatic reload, use Users Management instead.
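For example, a restart-required parameter in postgresql.conf and a client authentication rule in pg_hba.conf look like this (all values below are illustrative, not ClusterControl defaults):

```ini
# postgresql.conf -- these take effect only after a full server restart
shared_buffers = 256MB      # (change requires restart)
max_connections = 200       # (change requires restart)

# pg_hba.conf -- a reload (pg_ctl reload or SELECT pg_reload_conf()) is enough
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   10.0.0.0/24   md5
```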
ClusterControl does not store configuration change history, so there is no versioning at the moment; only one version exists at a time. It imports the latest configuration files every 30 minutes and overwrites them in the CMON database. This limitation will be addressed in an upcoming release, in which ClusterControl shall support configuration versioning with a dynamic import interval.
Field | Description |
---|---|
Save | |
Import | |
Change/Set Parameter | |
Starting from ClusterControl 1.9.7 (September 2023), ClusterControl GUI v2 is the default frontend graphical user interface (GUI) for ClusterControl. Note that the GUI v1 is considered a feature-freeze product with no future development. All new developments will be happening on ClusterControl GUI v2. See User Guide (GUI v2).
Base Template Files
All services configured by ClusterControl use a base configuration template available under /usr/share/cmon/templates on the ClusterControl node. You can modify these files directly to suit your deployment policy; however, this directory is replaced on every package upgrade.
To make sure your custom configuration template files persist across upgrades, store them under the /etc/cmon/templates directory. When ClusterControl loads a template file for deployment, files under /etc/cmon/templates always take priority over the files under /usr/share/cmon/templates. If two files with identical names exist in both directories, the one located under /etc/cmon/templates will be used.
The following are template files provided by ClusterControl related to PostgreSQL:
Filename | Description |
---|---|
postgreschk | PostgreSQL health check script template for multi-master. |
postgreschk_rw_split | PostgreSQL health check script template for read-write splitting. |
postgreschk_xinetd | Xinetd configuration template for PostgreSQL health check. |
postgresql.service.override | Systemd unit file template for PostgreSQL service. |
haproxy_rw_split.cfg | HAProxy configuration template for read-write splitting. |
keepalived-1.2.7.conf | Legacy Keepalived configuration file (pre 1.2.7). This is deprecated. |
keepalived.conf | Keepalived configuration file. |
keepalived.init | Keepalived init script, for the Build from Source installation option. |
Load Balancer
Deploys supported load balancers and virtual IP addresses for this cluster.
HAProxy
Installs and configures an HAProxy instance. ClusterControl will automatically install and configure HAProxy, install the postgreschk_rw_split script (which reports PostgreSQL health) on each of the database nodes as part of the xinetd service, and start the HAProxy service. Once the installation completes, HAProxy will listen on the configured Listen Port (5433 for read-write and 5434 for read-only connections) on the configured node.
This feature is idempotent; you can execute it as many times as you want and it will always reinstall everything as configured.
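A simplified sketch of the read-write-split listeners such a setup produces is shown below. Section names, addresses, and the health-check port are illustrative; the actual layout comes from the haproxy_rw_split.cfg template:

```
listen haproxy_rw
    bind *:5433
    mode tcp
    option httpchk
    # only the writable master passes the health check served by xinetd
    server db1 10.0.0.11:5432 check port 9201
    server db2 10.0.0.12:5432 check port 9201

listen haproxy_ro
    bind *:5434
    mode tcp
    option httpchk
    # all healthy nodes can serve read-only connections
    server db1 10.0.0.11:5432 check port 9201
    server db2 10.0.0.12:5432 check port 9201
```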
Deploy HAProxy
Field | Description |
---|---|
Server Address | |
Policy | |
Listen Port (Read/Write) | |
Install for read/write splitting (master-slave replication) | |
Installation Settings
Field | Description |
---|---|
Overwrite Existing /usr/local/sbin/postgreschk_rw_split on targets | |
Disable Firewall? | |
Disable SELinux/AppArmor? | |
Advanced Settings
Field | Description |
---|---|
Stats Socket | |
Admin Port | |
Admin User | |
Admin Password | |
Backend Name | |
Timeout Server (seconds) | |
Timeout Client (seconds) | |
Max Connections Frontend | |
Max Connections Backend/per instance | |
xinetd allow connections from | |
Server instances in the load balancer
Field | Description |
---|---|
Include | |
Role | |
Connection Address | |
Import HAProxy
Field | Description |
---|---|
HAProxy Address | |
cmdline | |
Port | |
Admin User | |
Admin Password | |
LB Name | |
HAProxy Config | |
Stats Socket | |
You need to have an admin user/password set in the HAProxy configuration; otherwise, you will not see any HAProxy stats.
PgBouncer
PgBouncer is a lightweight connection pooler for PostgreSQL. It reduces PostgreSQL resource consumption (memory, backends, forks) and supports online restart or upgrade without dropping client connections. Using ClusterControl, you can manage PgBouncer on one or more nodes, manage multiple pools per node, and use any of its three pool modes:
- session (default): When a client connects, a server connection will be assigned to it for the whole duration the client stays connected. When the client disconnects, the server connection will be put back into the pool.
- transaction: A server connection is assigned to a client only during a transaction. When PgBouncer notices that the transaction is over, the server connection will be put back into the pool.
- statement: The server connection will be put back into the pool immediately after a query completes. Multi-statement transactions are disallowed in this mode as they would break.
Deploy PgBouncer
ClusterControl only supports deploying PgBouncer on the same host as PostgreSQL. When deploying a PgBouncer node, ClusterControl uses the following default values:
- Command:
/usr/bin/pgbouncer /etc/pgbouncer/pgbouncer.ini
- Port: 6432
- Configuration file:
/etc/pgbouncer/pgbouncer.ini
- Logfile:
/var/log/pgbouncer/pgbouncer.log
- Auth file:
/etc/pgbouncer/userlist.txt
- Pool mode: session
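Based on these defaults, the resulting pgbouncer.ini would look roughly like the sketch below. The `[databases]` entry and the listen address are illustrative assumptions; only the port, file paths, and pool mode come from the defaults above:

```ini
; /etc/pgbouncer/pgbouncer.ini -- sketch built from the defaults above
[databases]
; illustrative pool pointing at the co-located PostgreSQL server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
logfile = /var/log/pgbouncer/pgbouncer.log
pool_mode = session
```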
Field | Description |
---|---|
PgBouncer Node 1 | |
Listen Port | |
Add PgBouncer Instance | |
PgBouncer Admin User | |
PgBouncer Admin Password | |
Deploy PgBouncer | |
After the PgBouncer installation finishes, the node will be listed under the Nodes page where you can manage the connection pools.
Import PgBouncer
Field | Description |
---|---|
PgBouncer Node 1 | |
Listen Port | |
Add PgBouncer Instance | |
PgBouncer Admin User | |
PgBouncer Admin Password | |
Import PgBouncer | |
After the PgBouncer import operation finishes, the node will be listed under the Nodes page where you can manage the connection pools.
Keepalived
Keepalived requires two HAProxy instances in order to provide virtual IP address failover. By default, the virtual IP address is assigned to the instance 'Keepalived 1'. If that node goes down, the IP address automatically fails over to 'Keepalived 2'.
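A minimal sketch of a keepalived.conf expressing this failover arrangement is shown below. The interface name, router ID, priorities, and virtual IP are illustrative; the actual file is generated from the keepalived.conf template:

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # succeeds while the haproxy process is alive
    interval 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER            # 'Keepalived 1'; the peer node runs as BACKUP
    virtual_router_id 51
    priority 101            # the peer uses a lower priority, e.g. 100
    virtual_ipaddress {
        10.0.0.100
    }
    track_script {
        chk_haproxy         # losing HAProxy triggers failover of the VIP
    }
}
```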
Deploy Keepalived
Field | Description |
---|---|
Select type of loadbalancer | |
Keepalived 1 | |
Keepalived 2 | |
Virtual IP | |
Network Interface | |
Install Keepalived | |
Import Keepalived
Field | Description |
---|---|
Keepalived 1 | |
Add Keepalived Instance | |
Remove Keepalived Instance | |
Virtual IP | |
Deploy Keepalived | |
Users Management
Users
Shows a summary of PostgreSQL users and privileges for the cluster. All of the changes are automatically synced to all database nodes in the cluster.
You can filter the list by username, hostname, database, or table in the text box. Click on Edit to update the existing user or Drop User to remove the existing user. Click on Create New User to open the user creation wizard:
Field | Description |
---|---|
Username | |
Password | |
Hostname | |
Privileges | |
Add Statement | |
Upgrades
Performs minor software upgrades of database and load balancer software, for example from PostgreSQL 12.3 to PostgreSQL 12.5, in a rolling upgrade fashion. ClusterControl performs the software upgrade based on what is available in the package repository for the particular vendor.
For a master-slave replication setup (PostgreSQL Streaming Replication), ClusterControl will only perform the upgrade on the slaves. Once the upgrade job on the slaves has completed successfully, promote an upgraded slave to be the new master, then repeat the same upgrade process for the former master (now demoted to a slave). To promote a slave, go to Nodes → pick an upgraded slave → Promote Slave.
A database major version upgrade is not supported by ClusterControl. Major version upgrades have to be performed manually, as they involve risky operations such as database package removal, configuration compatibility issues, connector compatibility, and so on.
Field | Description |
---|---|
Upgrade | |
Check for New Packages | |
Select Nodes to Upgrade | |
Developer Studio
Provides functionality to create Advisors, Auto Tuners, or Mini Programs right within your web browser based on ClusterControl DSL. The DSL syntax is based on JavaScript, with extensions to provide access to ClusterControl’s internal data structures and functions. The DSL allows you to execute SQL statements, run shell commands/programs across all your cluster hosts, and retrieve results to be processed for advisors/alerts or any other actions. Developer Studio is a development environment to quickly create, edit, compile, run, test, debug, and schedule your JavaScript programs.
Advisors in ClusterControl are powerful constructs; they provide specific advice on how to address issues in areas such as performance, security, log management, configuration, storage space, etc. They can be anything from simple configuration advice, warning on thresholds or more complex rules for predictions, or even cluster-wide automation tasks based on the state of your servers or databases.
ClusterControl comes with a set of basic advisors that include rules and alerts on security settings, system checks (NUMA, Disk, CPU), queries and so on. The advisors are open source under MIT license, and publicly available at GitHub. Through the Developer Studio, it is easy to import new advisors as a JS bundle or export your own for others to try out.
Field | Description |
---|---|
New | |
Import | |
Export | |
Advisors | |
Save | |
Move | |
Remove | |
Compile | |
Compile and run | |
Schedule Advisor | |
For full documentation on ClusterControl Domain Specific Language, see ClusterControl DSL.
Tags
This feature is introduced in ClusterControl v1.8.2.
Use tags to allow filtering and searching for clusters. Each cluster can have zero or more tags to help keep the clusters organized. Note that special characters like spaces, tabs, and dollar signs are not supported. The created tags can be used to filter clusters on the Database Clusters list page by clicking on the magnifying-glass icon in the top menu (next to the "Database Clusters" string).
To remove a tag, simply click on the remove icon next to it.
Tags created here can also be used with the ClusterControl CLI via the --with-tags or --without-tags flags. See s9s-cluster.
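For example, assuming a tag named production has been created here, the s9s CLI could filter clusters on it like this (the tag name is illustrative; see s9s-cluster for the full option reference):

```shell
# list only clusters carrying the 'production' tag
s9s cluster --list --long --with-tags="production"

# list clusters that do NOT carry the tag
s9s cluster --list --long --without-tags="production"
```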