Create Load Balancer

ClusterControl simplifies the deployment of load balancers for database clusters by providing a guided workflow via its web UI and CLI. Load balancers distribute traffic across multiple database nodes, improving availability, fault tolerance, and performance.

With Create Load Balancer, you can:

  • Deploy supported load balancers in front of your database clusters.
  • Automatically configure health checks and failover mechanisms.
  • Integrate with Keepalived to provide VIP-based failover.
  • Manage multiple load balancer instances for redundancy.

ClusterControl supports deploying the following load balancer types:

  • ProxySQL
  • HAProxy
  • MariaDB MaxScale
  • Garbd
  • PgBouncer
  • Keepalived

Prerequisites

Before deploying a load balancer, ensure the following:

  • You have a database cluster deployed and managed by ClusterControl.
  • Passwordless SSH (SSH using key-based authentication) is configured from the ClusterControl node to load balancer nodes. See SSH Key-based Authentication.
  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.
  • The target node(s) are reachable from the ClusterControl server.

Support Matrix

| Load Balancer | Supported Databases |
| --- | --- |
| ProxySQL | MySQL, MariaDB, Galera Cluster |
| HAProxy | MySQL, MariaDB, Galera Cluster, PostgreSQL, TimescaleDB |
| MariaDB MaxScale | MySQL, MariaDB, Galera Cluster |
| Garbd | Galera Cluster |
| PgBouncer | PostgreSQL, TimescaleDB |
| Keepalived | Floating virtual IP address (VIP) setup |

ProxySQL

This section is for ProxySQL nodes (MySQL/MariaDB replication, Galera Cluster). By default, ClusterControl deploys ProxySQL in read/write split mode: it creates two host groups so that read-only traffic is sent to replicas while writes are sent to a writable primary. If the primary fails, ProxySQL detects the new writable primary and routes writes to it automatically, without any user intervention. ProxySQL offers a wide range of features that make it suitable for improving the scalability, availability, performance, and security of MySQL database infrastructures. You can use an existing database server or another new host by specifying the hostname or IP address. With two or more ProxySQL nodes, you can then configure a virtual IP address service using Keepalived.
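
For example, once the deployment completes you can inspect the two host groups from the ProxySQL admin interface on port 6032. This is a minimal sketch; the admin credentials are placeholders for what you configure in the wizard:

    # Connect to the ProxySQL admin interface (port 6032) and list the backend
    # servers together with the host group each one belongs to
    mysql -u admin -p -h 127.0.0.1 -P 6032 \
        -e "SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;"

    # Show the query rules that implement the read/write split
    mysql -u admin -p -h 127.0.0.1 -P 6032 \
        -e "SELECT rule_id, match_digest, destination_hostgroup FROM runtime_mysql_query_rules;"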

  1. Navigate to ClusterControl GUI → Select a cluster → Cluster Actions → Add new → Load balancer and choose Create balancer.

  2. Choose the load balancer service ProxySQL.

  3. Click Continue.

  4. You will be presented with a Create ProxySQL service deployment wizard. Specify:

    • Version: Choose a supported version.
    • Server address: Specify the hostname or IP address of the host. This host must be accessible via passwordless SSH from the ClusterControl node.
    • Admin port: ProxySQL administration port. The default is 6032.
    • Listening port: The port ProxySQL listens on for MySQL load balancing. The default is 6033. This is where applications and clients will connect after ProxySQL is activated.
  5. For ProxySQL Configuration, you can choose:

    • Disable firewall: Whether ClusterControl should disable the firewall (iptables, ufw, firewalld) on the node during deployment. It is recommended to use the default setting (checked). If unchecked, this configuration task will be skipped; if the firewall is enabled, ensure you have opened the necessary ports prior to this deployment, otherwise the deployment risks failure. See Firewall and Security Groups.
    • Disable SELinux/AppArmor: Whether ClusterControl should disable SELinux (Red Hat-based operating systems) or AppArmor (Debian-based operating systems). It is recommended to use the default setting (checked). If unchecked, ensure you have set a proper policy for the database-related processes and all of their dependencies.
    • Import configuration: Deploys a new ProxySQL based on an existing ProxySQL instance. The source instance must be added first into ClusterControl. Once added, you can choose the source ProxySQL instance from a dropdown list.
    • Use native clustering: The ProxySQL server will be created using native clustering, and an entry will be created in the proxysql_servers table. It is recommended to enable this if you plan to run more than one ProxySQL node. Port 6032 must be reachable between all ProxySQL nodes.

    Then, click Continue to the next step.

  6. Under the Configuration section, specify the credentials that ClusterControl should use to create the load balancer:

    • Administration user: ProxySQL administration user name.
    • Administration password: Password for the administration user.
    • Monitor user: ProxySQL monitor user name.
    • Monitor password: Password for the monitor user.
    • Database user: Create a new database user or use an existing one.
      • For an existing user:
        • Database username: Choose the existing user name from the dropdown.
        • Database password: Password for the database user.
      • For a new user:
        • Database username: The database user name.
        • Database password: Password for the database user.
        • Database name: Database name in “database.table” format.
        • MySQL privilege(s): ClusterControl suggests privilege names as you type. Multiple privileges can be specified.

    Note

    Administration requires port 6032 to be open in the firewall/security group.

  7. In the Server instances section, specify which nodes are going to be added to the load balancer, along with their Max replication lag, Max connection, and Weight. Also, choose whether you want to use implicit transactions; the default value is Off.

    • Max replication lag: How many seconds of replication lag should be allowed before marking the node as unhealthy. The default value is 10.
    • Max connection: The maximum number of connections to be sent to the backend servers. It is recommended to set this to match, or be lower than, the max_connections value of the backend servers.
    • Weight: This value is used to adjust the server’s weight relative to other servers. All servers will receive a load proportional to their weight relative to the sum of all weights. The higher the weight, the higher the priority.
    • Are you using implicit transactions?: Off – if you rely on SET AUTOCOMMIT=0 to create a transaction. On – if you explicitly use BEGIN or START TRANSACTION to create a transaction. Choose Off if you are unsure.
  8. Click Continue to proceed to the Preview step. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the load balancer by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.

    Example output

    ProxySQL Deployment Preview

  9. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.

  • Create a ProxySQL node:

    s9s cluster \
        --cluster-id=2 \
        --add-node \
        --nodes="proxysql://10.10.10.103?db_username=proxysqluser&db_password=proxysqlpass&db_database='*.*'&db_privs='ALL PRIVILEGES'" \
        --wait \
        --log
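
Once the node is up, applications connect to ProxySQL’s listening port (6033 by default) rather than to the database directly. A quick sanity check, using the database user created in step 6 (host and credentials are placeholders):

    # A read routed through ProxySQL; repeat it to see reads being distributed
    # across the reader host group
    mysql -u proxysqluser -pproxysqlpass -h 10.10.10.103 -P 6033 \
        -e "SELECT @@hostname, @@read_only;"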
    

HAProxy

This section is for HAProxy nodes (MySQL/MariaDB replication, Galera Cluster, PostgreSQL, TimescaleDB). ClusterControl will automatically install and configure HAProxy, install the mysqlchk health check script on each of the database nodes as part of the xinetd service, and start the HAProxy service. If you set up read/write splitting for primary-replica replication, two listening ports are configured (one for read-write and one for read-only connections). You can use an existing database server or another new host by specifying the hostname or IPv4 address. With two or more HAProxy nodes, you can then configure a virtual IP address service using Keepalived.
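
The health check script is exposed by xinetd on port 9200 of every database node and replies with a plain HTTP status that HAProxy evaluates. You can probe it manually; a sketch with the database node address as a placeholder, and typical responses shown as comments:

    # Ask the xinetd health check service whether this database node is healthy
    curl -i http://10.10.10.101:9200/
    # HTTP/1.1 200 OK                   -> the node is healthy and can receive traffic
    # HTTP/1.1 503 Service Unavailable  -> the node is down (or read-only, for the RW check)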

  1. Navigate to ClusterControl GUI → Select a cluster → Cluster Actions → Add new → Load balancer and choose Create balancer.

  2. Choose the load balancer service HAProxy.

  3. Click Continue.

  4. You will be presented with a Create HAProxy service deployment wizard. Specify:

    • Server address: Select which host to add the load balancer. If the host is not provisioned by ClusterControl, type in the IP address or hostname. The required files will be installed on the new host. Note that ClusterControl will access the new host using passwordless SSH.
    • Policy: Choose one of these load-balancing algorithms:
      • leastconn: The server with the lowest number of connections receives the connection.
      • roundrobin: Each server is used in turns, according to their weights.
      • source: Same client IP address will always reach the same server as long as no server goes down.
    • Listen port (read/write): The listening port that ClusterControl shall configure during the deployment process to receive read/write traffic.
    • Listen port (read-only): The listening port that ClusterControl shall configure during the deployment process to receive read-only traffic.
    • Install for read/write splitting (master-slave replication): Toggle On if you want HAProxy to use a separate listener port for read-only traffic. A new text box for Listen port (read-only) will appear next to the Listen port (read/write) text box, where you can specify the port for read-only database connections.
    • Disable firewall: Whether ClusterControl should disable the firewall (iptables, ufw, firewalld) on the node during deployment. It is recommended to use the default setting (checked). If unchecked, this configuration task will be skipped; if the firewall is enabled, ensure you have opened the necessary ports prior to this deployment, otherwise the deployment risks failure. See Firewall and Security Groups.
    • Disable SELinux/AppArmor: Whether ClusterControl should disable SELinux (Red Hat-based operating systems) or AppArmor (Debian-based operating systems). It is recommended to use the default setting (checked). If unchecked, ensure you have set a proper policy for the database-related processes and all of their dependencies.

    Then, click Continue to the next step.

  5. Under the Installation settings section, specify whether you want to Overwrite existing health check script on the node. This is enabled by default.

  6. In Advanced settings section, specify:

    • Stats socket: Specify the path to bind a UNIX socket for HAProxy statistics.
    • Admin port: Port to listen to the HAProxy statistic page.
    • Admin user: Admin username to access the HAProxy statistic page.
    • Admin password: Password for the admin user.
    • Backend name (RW): Name of the read-write backend to be created in HAProxy.
    • Backend name (RO): Name of the read-only backend to be created in HAProxy.
    • Timeout server (in seconds): Sets the maximum inactivity time on the server side.
    • Timeout client (in seconds): Sets the maximum inactivity time on the client side.
    • Max connections frontend: Sets the maximum per-process number of concurrent connections to the HAProxy instance.
    • Max connections backend (per instance): Sets the maximum per-process number of concurrent connections per backend instance.
    • xinetd allow connections from: The specified subnet will be allowed to access the health check script mysqlchk or mysqlchk_rw_split for read/write splitting (MySQL-based clusters) or postgreschk and postgreschk_rw_split (PostgreSQL-based clusters) as xinetd service, which listens on port 9200 on every database node. To allow connections from all IP addresses, use the default value, “0.0.0.0/0”.
  7. For the Server instances section, switch on the instances you want to include in the load balancer. Under Advanced options, you can set the Role (active – the server is actively used in load balancing; backup – the server is only used when all other non-backup servers are unavailable) and the Connection address (the IP address on which HAProxy should listen on the host) for each database node.

  8. Click Continue to proceed to the Preview step. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the load balancer by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.

    Example output

    HAProxy Deployment Preview

  9. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.

  • Create an HAProxy node:

    s9s cluster \
        --add-node \
        --cluster-id=1 \
        --nodes="haproxy://10.10.10.101" \
        --wait \
        --log
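
After deployment, the HAProxy statistics page reports the state of every frontend and backend. You can fetch it with the admin credentials from the Advanced settings step; the host, port, and password below are placeholders for the values you configured:

    # Fetch the HAProxy statistics page (or open the same URL in a browser)
    curl -u admin:yourpassword http://10.10.10.101:9600/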
    

MariaDB MaxScale

This section is for MariaDB MaxScale nodes (MySQL/MariaDB replication, Galera Cluster). MaxScale is an intelligent proxy that allows the forwarding of database statements to one or more database servers using complex rules, a semantic understanding of the database statements, and the roles of the various servers within the backend cluster of databases.

You can deploy a new MaxScale node or import an existing one as a load balancer and query router for your Galera Cluster or MySQL/MariaDB replication setup. For a new deployment, ClusterControl by default creates two production services:

  • RW – Implements read-write split access.
  • RR – Implements round-robin access.

With two or more MaxScale nodes, you can then configure a virtual IP address service using Keepalived.
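
After deployment you can confirm the two services and the backend servers they use with MaxScale’s maxctrl client on the MaxScale node; a minimal sketch, assuming maxctrl authenticates with the admin credentials from the wizard:

    # List the RW (read-write split) and RR (round-robin) services
    maxctrl list services

    # List the backend database servers and their current states
    maxctrl list servers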

Attention

ClusterControl installs MariaDB MaxScale via direct package download, without using the MariaDB repository. The package download URL is kept inside /usr/share/cmon/templates/packages.conf on the ClusterControl server, under the [maxscale] section. Occasionally the provided URL becomes outdated when MariaDB releases a new minor version and removes the older minor version for a given MaxScale major version. In that case, you must manually update the download link in this file. The current download URL is available on the MariaDB MaxScale website.

  1. Navigate to ClusterControl GUI → Select a cluster → Cluster Actions → Add new → Load balancer and choose Create balancer.

  2. Choose the load balancer service MariaDB MaxScale.

  3. Click Continue.

  4. You will be presented with a Create MaxScale service deployment wizard. Specify:

    • Server address: The IP address of the node where MaxScale will be installed. ClusterControl must be able to perform passwordless SSH to this host.
    • Threads: How many threads MaxScale is allowed to use.
    • RR Port: Port for the round-robin listener. The default is 4006.
    • RW Port: Port for the read-write split listener. The default is 4008.
    • Disable firewall: Whether ClusterControl should disable the firewall (iptables, ufw, firewalld) on the node during deployment. It is recommended to use the default setting (checked). If unchecked, this configuration task will be skipped; if the firewall is enabled, ensure you have opened the necessary ports prior to this deployment, otherwise the deployment risks failure. See Firewall and Security Groups.
    • Disable SELinux/AppArmor: Whether ClusterControl should disable SELinux (Red Hat-based operating systems) or AppArmor (Debian-based operating systems). It is recommended to use the default setting (checked). If unchecked, ensure you have set a proper policy for the database-related processes and all of their dependencies.

    Then, click Continue to the next step.

  5. Under the Configuration section, specify the credentials that ClusterControl should use to create the load balancer:

    • MaxScale admin username: MaxScale admin user name. The default is ‘admin’.
    • MaxScale admin password: MaxScale enforces that the password for the default ‘admin’ user is ‘mariadb’. If you want to use a different password, you must create another admin user; the ‘admin’ user can later be dropped.
    • MaxScale MySQL username: MariaDB/MySQL user that MaxScale will use to access and monitor the MariaDB/MySQL nodes.
    • MaxScale MySQL password: Password for the MaxScale MySQL user.
  6. In the Server instances section, switch on the instances you want to be included in the load balancer.

  7. Click Continue to proceed to the Preview step. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the load balancer by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.

    Example output

    MaxScale Deployment Preview

  8. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.

  • Create a MaxScale node:

    s9s cluster \
        --cluster-id=2 \
        --add-node \
        --nodes="maxscale://10.10.10.104" \
        --maxscale-mysql-user=maxscaleuser \
        --maxscale-mysql-password=maxscalepass \
        --wait \
        --log
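
Applications then connect through the round-robin listener on port 4006 or the read-write split listener on port 4008; for example (host and credentials are placeholders):

    # Route this session through the read-write split service
    mysql -u appuser -papppass -h 10.10.10.104 -P 4008 -e "SELECT @@hostname;"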
    

Garbd

Exclusive for Galera Cluster. Galera arbitrator daemon (garbd) can be installed to avoid network partitioning or split-brain scenarios.
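
garbd joins the cluster as an arbitrator: it takes part in quorum voting but stores no data, so it can break ties without the cost of a full node. Under the hood it is started with a command line similar to the sketch below, which is what the CmdLine field in step 4 configures; the addresses and group name are placeholders:

    # Join the Galera group as an arbitrator (quorum vote only, no data)
    garbd --address="gcomm://10.10.10.101:4567,10.10.10.102:4567" \
          --group="my_galera_cluster" \
          --log="/var/log/garbd.log" \
          --daemon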

  1. Navigate to ClusterControl GUI → Select a cluster → Cluster Actions → Add new → Load balancer and choose Create balancer.

  2. Choose the load balancer service Garbd.

  3. Click Continue.

  4. You will be presented with a Create garbd service deployment wizard. Specify:

    • Server address: The server address that ClusterControl will use to deploy the load balancer. ClusterControl must be able to perform passwordless SSH to this host.
    • CmdLine: Garbd command line to start garbd process on the target node.
    • Disable firewall: Whether ClusterControl should disable the firewall (iptables, ufw, firewalld) on the node during deployment. It is recommended to use the default setting (checked). If unchecked, this configuration task will be skipped; if the firewall is enabled, ensure you have opened the necessary ports prior to this deployment, otherwise the deployment risks failure. See Firewall and Security Groups.
    • Disable SELinux/AppArmor: Whether ClusterControl should disable SELinux (Red Hat-based operating systems) or AppArmor (Debian-based operating systems). It is recommended to use the default setting (checked). If unchecked, ensure you have set a proper policy for the database-related processes and all of their dependencies.

    Then, click Continue to the next step.

  5. Click Continue to proceed to the Preview step. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the load balancer by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.

    Example output

    Garbd Deployment Preview

  6. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.

Attention

ClusterControl does not support or allow garbd deployment on the server where ClusterControl itself is running and hosted, as the installation tends to remove the existing MySQL packages managed by the software packaging tools.

PgBouncer

This section is for PgBouncer nodes (PostgreSQL, TimescaleDB). PgBouncer is a lightweight connection pooler for PostgreSQL. It reduces PostgreSQL resource consumption (memory, backends, fork) and supports online restart or upgrade without dropping client connections.

Using ClusterControl, you can manage PgBouncer on one or more nodes, manage multiple pools per node, and choose between the three supported pool modes (a sample configuration follows this list):

  • session (default): When a client connects, a server connection will be assigned to it for the whole duration the client stays connected. When the client disconnects, the server connection will be put back into the pool.
  • transaction: A server connection is assigned to a client only during a transaction. When PgBouncer notices that the transaction is over, the server connection will be put back into the pool.
  • statement: The server connection will be put back into the pool immediately after a query completes. Multi-statement transactions are disallowed in this mode as they would break.
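
The chosen pool mode ends up in pgbouncer.ini on the PgBouncer node. A minimal illustrative snippet, with placeholder names and paths (ClusterControl generates the real file during deployment):

    [databases]
    ; route "mydb" to the local PostgreSQL server
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; one of: session, transaction, statement
    pool_mode = session
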
  1. Navigate to ClusterControl GUI → Select a cluster → Cluster Actions → Add new → Load balancer and choose Create balancer.

  2. Choose the load balancer service PgBouncer.

  3. Click Continue.

  4. You will be presented with a Create PgBouncer service deployment wizard. Specify:

    • PgBouncer admin user: PgBouncer admin username. ClusterControl will create the admin user if specified. If empty, ClusterControl will generate an admin user called “pgbadmin”.
    • PgBouncer admin password: The password for the PgBouncer admin user.
  5. Under the Add nodes section, specify the nodes where to deploy PgBouncer and the Listening port to be used (default is 6432).

  6. Click Continue to proceed to the Preview step. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the load balancer by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.

    Example output

    PgBouncer Deployment Preview

  7. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.

  • Create a PgBouncer node:

    s9s cluster \
        --cluster-id=4 \
        --add-node \
        --nodes="pgbouncer://10.10.10.107" \
        --wait \
        --log
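
Once deployed, you can inspect the connection pools through PgBouncer’s admin console by connecting to the special pgbouncer database on the listening port; a sketch, assuming the default pgbadmin admin user:

    # Connect to the PgBouncer admin console and list the active pools
    psql -h 10.10.10.107 -p 6432 -U pgbadmin pgbouncer -c "SHOW POOLS;"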
    

Keepalived

This section is for Keepalived deployment. Keepalived requires two or more HAProxy, ProxySQL, or MariaDB MaxScale instances to provide virtual IP address failover. By default, the virtual IP address is assigned to the instance ‘Keepalived 1’; if that node goes down, the IP address automatically fails over to ‘Keepalived 2’. Keepalived uses the IP Virtual Server (IPVS) kernel module to provide transport layer (Layer 4) load balancing, redirecting requests for network-based services to individual members of a server cluster.
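
Conceptually, each Keepalived node runs a VRRP instance similar to the sketch below: the node with the highest priority holds the virtual IP address, and a standby takes over when it stops answering. The interface name, router ID, priority, and VIP are placeholders for what you configure in the wizard:

    vrrp_instance VI_HAPROXY {
        state MASTER              # BACKUP on the standby nodes
        interface eth0            # must match the interface name on the node
        virtual_router_id 51      # same value on all Keepalived nodes
        priority 101              # the highest priority holds the VIP
        advert_int 1
        virtual_ipaddress {
            10.10.10.100          # the floating virtual IP address
        }
    }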

  1. Navigate to ClusterControl GUI → Select a cluster → Cluster Actions → Add new → Load balancer and choose Create balancer.

  2. Choose the load balancer service Keepalived.

  3. Click Continue.

  4. You will be presented with a Create Keepalived service deployment wizard:

    • Load balancer type: Choose HAProxy, ProxySQL, or MaxScale.
    • Keepalived: Choose the servers from the list, or type the IP address or hostname of the nodes where Keepalived should be installed.
    • Disable firewall: Whether ClusterControl should disable the firewall (iptables, ufw, firewalld) on the node during deployment. It is recommended to use the default setting (checked). If unchecked, this configuration task will be skipped; if the firewall is enabled, ensure you have opened the necessary ports prior to this deployment, otherwise the deployment risks failure. See Firewall and Security Groups.
    • Disable SELinux/AppArmor: Whether ClusterControl should disable SELinux (Red Hat-based operating systems) or AppArmor (Debian-based operating systems). It is recommended to use the default setting (checked). If unchecked, ensure you have set a proper policy for the database-related processes and all of their dependencies.
  5. Under the Host configuration section, specify the virtual IP address and the network interface to which the virtual IP address will be added. The interface must be able to communicate with the other Keepalived instances and must support IP protocol 112 (VRRP) and unicast.

    Note

    Make sure the specified network interface exists with the same name on all of the nodes.

  6. Click Continue to proceed to the Preview step. In this section, you can see the summary of your deployment and if everything is correct, you may proceed to deploy the load balancer by clicking Finish. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.

    Example output

    Keepalived Deployment Preview

  7. ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl GUI → Activity Center → Jobs.
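
After the job completes, you can verify which node currently holds the virtual IP address; a sketch, assuming the interface is eth0 and the VIP is 10.10.10.100:

    # On the active Keepalived node, the VIP appears as a secondary address
    ip addr show eth0 | grep 10.10.10.100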