
Manage

Hosts

Lists the hosts managed by ClusterControl for this cluster. This includes:

  • ClusterControl node
  • PostgreSQL nodes (standalone)
  • PostgreSQL master nodes (replication)
  • PostgreSQL slave nodes (replication)

To remove a host, just select the host and click on the Remove button.

Attention

We strongly recommend that users avoid removing nodes from this page if they still hold a role inside ClusterControl.

Configurations

Manages the configuration files of your database and HAProxy nodes. Within the pg_hba.conf and postgresql.conf files, some parameters contain comments that indicate # (change requires restart). Note that ClusterControl performs neither a reload nor a restart after modifying the configuration file. You have to schedule the reload/restart operation yourself to load the changes into the server runtime. If you would like to create a user with automatic reload, use Users Management instead.
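
For example, after editing a parameter you can apply it yourself from the command line. This is only a sketch; the service name is an assumption that varies by distribution and PostgreSQL version:

    # Reload for parameters that do not require a restart (e.g. pg_hba.conf changes)
    psql -U postgres -c "SELECT pg_reload_conf();"
    # Restart for parameters marked "change requires restart"; the service name may differ (e.g. postgresql-14)
    sudo systemctl restart postgresql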

Note

ClusterControl does not keep a history of configuration changes, so there is no versioning at the moment; only one version exists at any given time. It imports the latest configuration files every 30 minutes and overwrites them in the CMON database. This limitation will be improved in an upcoming release, where ClusterControl shall support configuration versioning with a dynamic import interval.

Field Description
Save
  • Save the changes that you have made and push them to the corresponding node.
Import
  • Re-import configuration if you have:
    • Performed local configuration changes directly on the configuration files.
    • Restarted the PostgreSQL servers/performed a rolling restart after a configuration change.
  • ClusterControl will trigger a job to fetch the latest modification from each PostgreSQL and/or HAProxy node.
Change/Set Parameter
  • ClusterControl will attempt to modify the configuration value if the parameter is valid in the respective PostgreSQL configuration file.
Attention

Starting from ClusterControl 1.9.7 (September 2023), ClusterControl GUI v2 is the default graphical user interface (GUI) for ClusterControl. Note that GUI v1 is considered a feature-freeze product with no future development. All new development will happen on ClusterControl GUI v2. See User Guide (GUI v2).

Base Template Files

All services configured by ClusterControl use a base configuration template available under /usr/share/cmon/templates on the ClusterControl node. You can directly modify these files to suit your deployment policy; however, this directory will be replaced on every package upgrade.

To make sure your custom configuration template files persist across upgrades, store the files under the /etc/cmon/templates directory. When ClusterControl loads a template file for deployment, files under /etc/cmon/templates always take priority over files under /usr/share/cmon/templates. If files with identical names exist in both directories, the one located under /etc/cmon/templates will be used.
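
For example, to customize one of the shipped templates in an upgrade-safe way, you might copy it into the override directory first and edit the copy (the file name below is just one of the templates listed in the table that follows):

    # Copy the shipped template into the override directory, then edit the copy
    sudo cp /usr/share/cmon/templates/haproxy_rw_split.cfg /etc/cmon/templates/
    sudo vi /etc/cmon/templates/haproxy_rw_split.cfg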

The following are template files provided by ClusterControl related to PostgreSQL:

Filename – Description
postgreschk – PostgreSQL health check script template for multi-master.
postgreschk_rw_split – PostgreSQL health check script template for read-write splitting.
postgreschk_xinetd – Xinetd configuration template for the PostgreSQL health check.
postgresql.service.override – Systemd unit file template for the PostgreSQL service.
haproxy_rw_split.cfg – HAProxy configuration template for read-write splitting.
keepalived-1.2.7.conf – Legacy Keepalived configuration file (pre 1.2.7). This is deprecated.
keepalived.conf – Keepalived configuration file.
keepalived.init – Keepalived init script, for the Build from Source installation option.

Load Balancer

Deploys supported load balancers and virtual IP addresses for this cluster.

HAProxy

Installs and configures an HAProxy instance. ClusterControl will automatically install and configure HAProxy, install the postgreschk_rw_split script (to report PostgreSQL health) on each of the database nodes as part of the xinetd service, and start the HAProxy service. Once the installation completes, PostgreSQL connections are served through the Listen Port (5433 for read-write and 5434 for read-only connections) on the configured node.

This feature is idempotent; you can execute it as many times as you want, and it will always reinstall everything as configured.
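
As an illustration (host name, user, and database are placeholders), applications would then send read-write traffic to port 5433 and read-only traffic to port 5434 on the HAProxy host:

    # Read-write connections through HAProxy
    psql -h haproxy.example.com -p 5433 -U myuser mydb
    # Read-only connections through HAProxy
    psql -h haproxy.example.com -p 5434 -U myuser mydb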

Deploy HAProxy

Field Description
Server Address
  • Select the host on which to add the load balancer. If the host is not provisioned in ClusterControl (see Hosts), type in the IP address. The required files will be installed on the new host. Note that ClusterControl will access the new host using passwordless SSH.
Policy
  • Choose one of these load balancing algorithms:
    • leastconn – The server with the lowest number of connections receives the connection.
    • roundrobin – Each server is used in turns, according to their weights.
    • source – The same client IP address will always reach the same server as long as no server goes down.
Listen Port (Read/Write)
  • Specify the HAProxy listening port. This will be used as the load-balanced PostgreSQL connection port for read/write connections.
Install for read/write splitting (master-slave replication)
  • Toggle on if you want the HAProxy to use another listener port for read-only. A new text box will appear right next to the Listen Port (Read/Write) text box.

Installation Settings

Field Description
Overwrite Existing /usr/local/sbin/postgreschk_rw_split on targets
  • Toggle on if you want to overwrite any existing PostgreSQL health check script on the target database nodes.
Disable Firewall?
  • Toggle on to disable the firewall (recommended). Otherwise, ClusterControl will not perform this action and any existing firewall rules will remain active.
Disable SELinux/AppArmor?
  • Toggle on to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).

Advanced Settings

Field Description
Stats Socket
  • Specify the path to bind a UNIX socket for HAProxy statistics. See stats socket.
Admin Port
  • Port on which the HAProxy statistics page listens.
Admin User
  • Admin username to access the HAProxy statistics page. See stats auth.
Admin Password
  • Admin password to access the HAProxy statistics page. See stats auth.
Backend Name
  • Name for the backend. No whitespace or tabs allowed.
Timeout Server (seconds)
  • Sets the maximum inactivity time on the server-side. See timeout server.
Timeout Client (seconds)
  • Sets the maximum inactivity time on the client-side. See timeout client.
Max Connections Frontend
  • Sets the maximum per-process number of concurrent connections to the HAProxy instance. See maxconn.
Max Connections Backend/per instance
  • Sets the maximum per-process number of concurrent connections per backend instance. See maxconn.
xinetd allow connections from
  • The specified subnet will be allowed to access the postgreschk_rw_split via xinetd service, which listens on port 9201 on each of the database nodes. To allow connections from all IP addresses, use the default value, 0.0.0.0/0.
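
Assuming the defaults, you can verify from an allowed subnet that the health check answers on a database node; the script returns an HTTP-style status, so a plain curl call (the IP address is a placeholder) is usually enough:

    # Query the xinetd-based health check on a database node
    curl -i http://192.168.100.11:9201/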

Server instances in the load balancer

Field Description
Include
  • Select the PostgreSQL servers in your cluster that will be included in the load balancing set.
Role
  • Supported roles:
    • Active – The server is actively used in load balancing.
    • Backup – The server is only used in load balancing when all other non-backup servers are unavailable.
Connection Address
  • Pick the IP address where HAProxy should be listening on the host.

Import HAProxy

Field Description
HAProxy Address
  • Select the host on which to add the load balancer. If the host has not been provisioned by ClusterControl (see Hosts), type in the IP address or hostname. The required files will be installed on the new host. Note that ClusterControl will access the new host using passwordless SSH.
cmdline
  • Specify the command line that ClusterControl should use to start the HAProxy service. You can verify this by running ps -ef | grep haproxy and retrieving the full command used to start the HAProxy process. Copy the full command line and paste it into the text field.
Port
  • Port of the HAProxy admin/statistics page (if enabled).
Admin User
  • Admin username to access the HAProxy statistics page. See stats auth.
Admin Password
  • Admin password to access the HAProxy statistics page. See stats auth.
LB Name
  • Name for the backend. No whitespace or tabs allowed.
HAProxy Config
  • Location of HAProxy configuration file (haproxy.cfg) on the target node.
Stats Socket
  • Specify the path to bind a UNIX socket for HAProxy statistics. See stats socket.
  • Usually, HAProxy writes the socket file to /var/run/haproxy.socket. ClusterControl needs this socket to monitor HAProxy. It is usually defined in the haproxy.cfg file, with a line that looks like this:
    stats socket /var/run/haproxy.socket user haproxy group haproxy mode 600 level admin
Note

You need to have an admin user/password set in the HAProxy configuration; otherwise, you will not see any HAProxy stats.
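
For example, to double-check an existing installation before importing it (socat is assumed to be installed, and the socket path is the default mentioned above):

    # Retrieve the full HAProxy command line for the cmdline field
    ps -ef | grep haproxy
    # Verify that the stats socket answers
    echo "show info" | socat stdio UNIX-CONNECT:/var/run/haproxy.socket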

PgBouncer

PgBouncer is a lightweight connection pooler for PostgreSQL. It reduces PostgreSQL resource consumption (memory, backends, forking) and supports online restart or upgrade without dropping client connections. Using ClusterControl, you can manage PgBouncer on one or more nodes, manage multiple pools per node, and use any of the following three pool modes (a sample pgbouncer.ini sketch follows the list):

  • session (default): When a client connects, a server connection will be assigned to it for the whole duration the client stays connected. When the client disconnects, the server connection will be put back into the pool.
  • transaction: A server connection is assigned to a client only during a transaction. When PgBouncer notices that the transaction is over, the server connection will be put back into the pool.
  • statement: The server connection will be put back into the pool immediately after a query completes. Multi-statement transactions are disallowed in this mode as they would break.
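
For reference, the pool mode is one of the settings in pgbouncer.ini; a minimal sketch (database name, host, and paths are placeholders, not necessarily what ClusterControl generates) could look like this:

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_port = 6432
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = session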

Deploy PgBouncer

ClusterControl only supports deploying PgBouncer on the same host as the PostgreSQL instance. When deploying a PgBouncer node, ClusterControl will use the following default values:

  • Command: /usr/bin/pgbouncer /etc/pgbouncer/pgbouncer.ini
  • Port: 6432
  • Configuration file: /etc/pgbouncer/pgbouncer.ini
  • Logfile: /var/log/pgbouncer/pgbouncer.log
  • Auth file: /etc/pgbouncer/userlist.txt
  • Pool mode: session
Field Description
PgBouncer Node 1
  • Select an existing PostgreSQL node from the dropdown.
Listen Port
  • Listening port for PgBouncer.
Add PgBouncer Instance
  • Shows additional input field for secondary PgBouncer node.
  • You can add a PgBouncer instance on every PostgreSQL node, up to the total number of nodes in the cluster.
PgBouncer Admin User
  • PgBouncer admin username. ClusterControl will create the admin user if specified.
  • If empty, ClusterControl will generate an admin user called “pgbadmin”.
PgBouncer Admin Password
  • Password for PgBouncer admin user.
Deploy PgBouncer
  • Starts the deployment of PgBouncer.

After the PgBouncer installation finishes, the node will be listed under the Nodes page where you can manage the connection pools.
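
With the defaults above, applications connect to PgBouncer instead of PostgreSQL directly, for example (host, user, and database are placeholders):

    # Connect through PgBouncer on the default port 6432
    psql -h 192.168.100.11 -p 6432 -U myuser mydb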

Import PgBouncer

Field Description
PgBouncer Node 1
  • Select an existing PostgreSQL node from the dropdown.
Listen Port
  • Listening port for PgBouncer.
Add PgBouncer Instance
  • Shows additional input field for secondary PgBouncer node. You can import PgBouncer on every PostgreSQL node, up to the total number of nodes in the cluster.
PgBouncer Admin User
  • PgBouncer admin username.
PgBouncer Admin Password
  • Password for PgBouncer admin user.
Import PgBouncer
  • Starts the PgBouncer import job.

After the PgBouncer import operation finishes, the node will be listed under the Nodes page where you can manage the connection pools.

Keepalived

Keepalived requires two HAProxy instances in order to provide virtual IP address failover. By default, the virtual IP address will be assigned to the ‘Keepalived 1’ instance. If that node goes down, the IP address will automatically fail over to ‘Keepalived 2’.

Deploy Keepalived

Field Description
Select type of loadbalancer
  • Only HAProxy is supported for PostgreSQL at the moment.
Keepalived 1
  • Select the primary Keepalived node (installed or imported using HAProxy).
Keepalived 2
  • Select the secondary Keepalived node (installed or imported using HAProxy).
Virtual IP
  • Assigns a virtual IP address. The IP address should not already be in use on any node in the cluster, to avoid conflicts.
Network Interface
  • Specify a network interface to bind the virtual IP address. This interface must be able to communicate with other Keepalived instances and support IP protocol 112 (VRRP) and unicasting (see the illustrative keepalived.conf snippet below).
Install Keepalived
  • Starts installation of Keepalived.
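
For illustration only (interface name, router ID, and virtual IP are placeholders, not necessarily what ClusterControl generates), a Keepalived VRRP instance binding a virtual IP to an interface looks roughly like this in keepalived.conf:

    vrrp_instance VI_HAPROXY {
        interface eth1                  # the Network Interface selected above
        state MASTER
        virtual_router_id 51
        priority 101
        virtual_ipaddress {
            192.168.100.200             # the Virtual IP assigned above
        }
    }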

Import Keepalived

Field Description
Keepalived 1
  • Specify the IP address or hostname of the primary Keepalived node.
Add Keepalived Instance
  • Shows additional input field for secondary Keepalived node.
Remove Keepalived Instance
  • Hides additional input field for secondary Keepalived node.
Virtual IP
  • Assigns a virtual IP address. The IP address should not already be in use on any node in the cluster, to avoid conflicts.
Deploy Keepalived
  • Starts the Keepalived import job.

Users Management

Users

Shows a summary of PostgreSQL users and privileges for the cluster. All of the changes are automatically synced to all database nodes in the cluster.

You can filter the list by username, hostname, database, or table in the text box. Click on Edit to update the existing user or Drop User to remove the existing user. Click on Create New User to open the user creation wizard:

Field Description
Username
  • PostgreSQL username.
Password
  • Password for Username. The minimum requirement is 4 characters.
Hostname
  • Hostname or IP address range of the user or client. For an IP address range, use the IP address/subnet format, e.g., 192.168.100.0/24.
Privileges
  • Specify the privilege for this user. If the Privileges text box is active, it will list out all possible privileges of the server.
  • Specify the database or table name. It can be in *.*, {database_name}, {database_name}.* or {database_name}.{table_name} format.
Add Statement
  • Add another Privileges statement builder entry for this user.

Upgrades

Performs minor software upgrades for database and load balancer software, for example from PostgreSQL 12.3 to PostgreSQL 12.5 in a rolling upgrade fashion. ClusterControl will perform the software upgrade based on what is available on the package repository for the particular vendor.

For a master-slave replication setup (PostgreSQL Streaming Replication), ClusterControl will only perform the upgrade on the slaves. Once the upgrade job on the slaves has completed successfully, promote an upgraded slave to become the new master and repeat the same upgrade process for the former master (which has now been demoted to a slave). To promote a slave, go to Nodes → pick an upgraded slave → Promote Slave.

Attention

Database major version upgrades are not supported by ClusterControl. A major version upgrade has to be performed manually, as it involves risky operations like database package removal, configuration compatibility concerns, connector compatibility, etc.

Field Description
Upgrade
  • Upgrades are performed online on one node at a time. The node will be stopped, then the software will be updated, and then the node will be started again. If a node fails to upgrade, the upgrade process is aborted and manual intervention is required to recover or reinstall the node.
  • If the database-related software is installed from the package repository, clicking on this will trigger an upgrade job using the corresponding package manager.
  • Upgrades should only be performed when there is as little traffic as possible on the cluster.
Check for New Packages
  • Triggers a job to check for any new versions of database-related packages. It is recommended to perform this operation before performing an actual upgrade.
  • You should see a list of packages under the Available Packages, and the bold lines are the ones that can be updated.
Select Nodes to Upgrade
  • Toggle all nodes that you want to upgrade.

Developer Studio

Provides functionality to create Advisors, Auto Tuners, or Mini Programs right within your web browser based on ClusterControl DSL. The DSL syntax is based on JavaScript, with extensions to provide access to ClusterControl’s internal data structures and functions. The DSL allows you to execute SQL statements, run shell commands/programs across all your cluster hosts, and retrieve results to be processed for advisors/alerts or any other actions. Developer Studio is a development environment to quickly create, edit, compile, run, test, debug, and schedule your JavaScript programs.
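
To give a flavor of the DSL, a minimal sketch of a script (the helper functions shown are examples; consult the ClusterControl DSL documentation for the authoritative API) looks like plain JavaScript with ClusterControl extensions:

    function main()
    {
        // List the hosts that belong to the current cluster
        var hosts = cluster::hosts();
        for (i = 0; i < hosts.size(); i++)
        {
            print(hosts[i]);
        }
    }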

Advisors in ClusterControl are powerful constructs; they provide specific advice on how to address issues in areas such as performance, security, log management, configuration, storage space, etc. They can be anything from simple configuration advice, warning on thresholds or more complex rules for predictions, or even cluster-wide automation tasks based on the state of your servers or databases.

ClusterControl comes with a set of basic advisors that include rules and alerts on security settings, system checks (NUMA, disk, CPU), queries, and so on. The advisors are open source under the MIT license and publicly available on GitHub. Through the Developer Studio, it is easy to import new advisors as a JS bundle or export your own for others to try out.

Field Description
New
  • Name – Specify the file name including folders if you need them. E.g. shared/helpers/cmon.js will create all appropriate folders if they don’t exist yet.
  • File content:
    • Empty file – Create a new empty file.
    • Template – Create a new file containing skeleton code for monitoring.
    • Generic MySQL Template – Create a new file containing skeleton code for generic MySQL monitoring.
Import
  • Imports a previously exported advisor bundle (.tar.gz) into Developer Studio.
Export
  • Exports the advisor’s directory as a .tar.gz archive. The exported file can be imported into Developer Studio through the ClusterControl → Manage → Developer Studio → Import function.
Advisors
  • Opens the Advisor list page. See Advisors.
Save
  • Saves the file.
Move
  • Moves the file around between different subdirectories.
Remove
  • Removes the script.
Compile
  • Compiles the script.
Compile and run
  • Compile and run the script. The output appears under the Message, Graph, or Raw response tab underneath the editor.
  • The arrow next to the “Compile and Run” button allows you to change settings for the script, for example, to pass arguments to the main() function.
Schedule Advisor
  • Schedules the script as an advisor.

For full documentation on ClusterControl Domain Specific Language, see ClusterControl DSL.

Tags

Note

This feature was introduced in ClusterControl v1.8.2.

Use tags to allow filtering and searching for clusters. Each cluster can have zero or many tags to help keep the clusters organized. Note that special characters like spaces, tabs, and dollar signs are not supported. The created tags can be used to filter clusters in the Database Cluster list page, by clicking on the magnifying glass icon on the top menu (next to the “Database Clusters” string).

To remove a tag, simply click on the x next to the tag.

Tags created here can also be used with the ClusterControl CLI using the --with-tags or --without-tags flag. See s9s-cluster.
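
For example (the tag name is a placeholder):

    # List only the clusters carrying a given tag
    s9s cluster --list --long --with-tags="production"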
