
Add Load Balancer

A database load balancer is a middleware service that stands between applications and databases. It distributes the workload across the database servers running behind it. The goals of database load balancing are to provide a single database endpoint for applications to connect to, increase query throughput, minimize latency, and maximize resource utilization of the database servers.

Adding a load balancer is similar to adding a database node to a cluster. Note, however, that this feature is not available for every cluster type; some cluster types do not support load balancers.

Support Matrix

The following database cluster types, vendors, and topologies support adding a load balancer, along with the load balancers available for each:

Database       | Vendor                   | Topology                                | Load Balancers Supported
---------------|--------------------------|-----------------------------------------|-----------------------------------------
MySQL          | Percona, Oracle          | Standalone, replication                 | HAProxy, ProxySQL, Keepalived, MaxScale
Galera Cluster | MariaDB, Percona         | Galera certification-based replication  | HAProxy, ProxySQL, Keepalived, MaxScale
PostgreSQL     | PostgreSQL, EnterpriseDB | Standalone, streaming replication       | HAProxy, Keepalived, PgBouncer
TimescaleDB    | TimescaleDB              | Standalone, streaming replication       | HAProxy, Keepalived, PgBouncer

Prerequisites

When adding a load balancer node, it is important to determine and prepare the host so that it meets the hardware, network, and user requirements necessary for access by ClusterControl and the other database nodes.

The following prerequisites must be fulfilled before deployment for all database clusters (a quick verification sketch follows the list):

  • Make sure the target database nodes are running on a supported architecture platform and operating system. See Hardware and Operating System.
  • Passwordless SSH (SSH using key-based authentication) is configured from the ClusterControl node to all database nodes. See SSH Key-based Authentication.
  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.

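A quick way to check the SSH and sudo prerequisites is a non-interactive probe from the ClusterControl node. This is only a sketch; the target address 192.168.40.101 and the ubuntu user are placeholders for your own host and operating system user.

    # From the ClusterControl node: key-based SSH plus passwordless sudo check.
    # A password prompt or "sudo: a password is required" means the
    # prerequisites are not yet satisfied.
    ssh -o BatchMode=yes ubuntu@192.168.40.101 "sudo -n whoami"
    # expected output: root
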
MySQL Replication

The following vendors and versions are supported for adding a load balancer node:

  • Oracle MySQL - 8.0.
  • Percona Server for MySQL - 8.0.
  • Percona Server for MySQL Pro - 8.0.
  • MariaDB Server - 10.4, 10.5, 10.6 (LTS), 10.11 (LTS) and 11.4 (LTS).

In MySQL Replication, whether the vendor is Oracle, MariaDB, or Percona, ClusterControl supports several types of load balancers. The supported balancers are the following:

  • ProxySQL - ProxySQL is a database-aware load balancer that understands the MySQL protocol and can provide advanced traffic control for database transactions. Use sticky queries, set priorities for important queries, or even prevent queries from being executed.
  • HAProxy - HAProxy is a very fast and reliable reverse proxy offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
  • Keepalived - Keepalived uses the IP Virtual Server (IPVS) kernel module to provide transport layer (Layer 4) load balancing, redirecting requests for network-based services to individual members of a server cluster.
  • MariaDB MaxScale - MariaDB MaxScale is a database proxy that extends the high availability, scalability, and security of MariaDB Server while at the same time simplifying application development by decoupling it from underlying database infrastructure.

In a typical environment, especially in production, a setup should use one of the three supported load balancers in ClusterControl: ProxySQL, HAProxy, or MariaDB MaxScale. Keepalived is then deployed on top of any of these three load balancers.

Each of these load balancers has a different configuration, philosophy, and mechanism. Make sure you are well prepared and have the necessary knowledge in order to manage the load balancer for your MySQL Replication cluster efficiently.

Default configuration

Each load balancer supported by ClusterControl ships with a different default configuration. These are described in the following sub-sections.

ProxySQL

Only the 2.x version is currently supported by ClusterControl. ProxySQL has an abundance of parameters and can be customized extensively. However, this document focuses only on the parameters that ClusterControl changes.

  • Admin port points to 6032 and Listening port points to 6033 unless changed. Make sure your firewall allows these ports so the connection can be established successfully.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.
  • Uses the proxysql-admin username for the Administration user and proxysql-monitor for the Monitor user, respectively.
  • Max replication lag is set to 10 unless changed.
  • Max connection is set to 1 unless changed.
  • Weight is set to 1 unless changed.
  • Implicit Transactions is disabled. When disabled/off, this means that your database transactions rely on using BEGIN/START TRANSACTION and COMMIT/ROLLBACK.
  • Creates the proxysql-monitor user (or whatever value is set as the username) on the MySQL Replication cluster's primary node.
  • Uses hostgroup_id=10 for writer group server.
  • Uses hostgroup_id=20 for reader group servers.
  • Assigns each server to a hostgroup based on its read-write ability. This applies only to database nodes that are registered in ClusterControl and belong to the target MySQL Replication cluster where the load balancer is deployed.
  • Creates the database user that was set (if any) under the ProxySQL settings in the GUI configuration. This user shall be created as a user in ProxySQL for load balancing and shall also be created as a new user on your database servers.
  • Adds query rules to the ProxySQL mysql_query_rules table with match patterns ^SELECT .* FOR UPDATE with destination_hostgroup: 10, ^SELECT .* with destination_hostgroup: 20, and .* with destination_hostgroup: 10 (see the sketch below).

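A minimal sketch for inspecting these defaults on a deployed node, assuming the default admin port 6032 and the proxysql-admin administration user described above:

    # Query ProxySQL's admin interface (it speaks the MySQL protocol) and
    # list the query rules ClusterControl created.
    mysql -u proxysql-admin -p -h 127.0.0.1 -P 6032 \
        -e "SELECT rule_id, match_pattern, destination_hostgroup FROM mysql_query_rules;"
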
Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.

HAProxy

ClusterControl will rely on the version available in the source repository of your target node's operating system.

  • Uses port 3307 for read-write and port 3308 for read-only transactions.
  • Uses the backend name haproxy_<ip_address>_3307_rw for read-write and haproxy_<ip_address>_3308_ro for read-only, where <ip_address> is the IP of the target node.
  • Default load balancing policy is leastconn.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.
  • Installs /usr/local/sbin/mysqlchk. If this is already installed, the default is to overwrite the file.
  • Sets up the administrative auth page.
  • Uses admin/admin user and password for the administrative auth page but can be changed in the GUI.
  • Uses stats/admin user and password for the stats page. You can only change this manually in the haproxy.cfg file.
  • Stats socket points to /var/run/haproxy.socket.
  • HAProxy admin port is 9600.
  • Timeout server is set to 10800 seconds.
  • Timeout client is set to 10800 seconds.
  • Global max connections frontend is set to 8192.
  • Max connections backend (per instance) is set to 64. If you expect a huge number of client connections, increase this to 1024 or higher.
  • All database nodes default to check when configured in the listen directive.
  • Sets up stats and admin page with insecure-password.
  • Sets up and installs a systemd script named cc-haproxy-mysqlchk for the service and socket listening on port 9200 (a quick check is sketched below).

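The defaults above can be sanity-checked after deployment. A minimal sketch, with placeholder IP addresses:

    # The cc-haproxy-mysqlchk health check runs on each database node and
    # answers on port 9200; a healthy node returns HTTP 200.
    curl -i http://192.168.40.22:9200/

    # On the HAProxy node, query the stats socket for frontend/backend state
    # (requires the socat package).
    echo "show stat" | socat stdio /var/run/haproxy.socket
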
Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.

Keepalived

ClusterControl will rely on the version available in the source repository of your target node's operating system. ClusterControl supports the ProxySQL, HAProxy, and MaxScale load balancer types for Keepalived to manage automatic failover, high availability, and redundancy. Other defaults are listed below, with a quick verification sketch after the list:

  • The first node during setup gets the highest priority. Priority starts at 100 and increments by 1.
  • Tunes OS by enabling net.ipv4.ip_nonlocal_bind=1.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.

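On each Keepalived node, you can confirm the kernel tunable mentioned above:

    # Verify the setting ClusterControl enables for Keepalived.
    sysctl net.ipv4.ip_nonlocal_bind
    # expected output: net.ipv4.ip_nonlocal_bind = 1
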
Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.

MariaDB MaxScale

ClusterControl shall deploy the MaxScale version specified in the template file /usr/share/cmon/templates/packages.conf found on the ClusterControl controller host. If you need a version other than the one in the file, you can edit the file and replace the version. Other defaults are as follows:

  • ClusterControl shall add MariaDB MaxScale repository to install the package.
  • MaxScale threads default is set to 4.
  • Read-only traffic port defaults to 4006.
  • Read-write traffic port defaults to 4008.
  • MaxScale admin user/password defaults to admin/mariadb, respectively.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.
  • Creates the monitor user with the following privileges: SHOW DATABASES, REPLICATION CLIENT (see the example grants below).

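As an illustration of the monitor user's grants, the equivalent SQL would look like the following; the username and password are placeholders, not necessarily the values ClusterControl uses:

    # Run against the primary database node. Illustrative only.
    mysql -u root -p -e "CREATE USER 'maxscale_monitor'@'%' IDENTIFIED BY 'S3cret!';
    GRANT SHOW DATABASES, REPLICATION CLIENT ON *.* TO 'maxscale_monitor'@'%';"
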
Note

If you want a specific version to be installed for your MariaDB MaxScale, update the /usr/share/cmon/templates/packages.conf file under the [maxscale] section and set the requiredVersion parameter to your desired version. See also Configuration Template for related context.

Deployment steps

ProxySQL

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose ProxySQL

  5. First step is Where to install.

    1. Version is set to 2.x

    2. Server address. Fill in the IP address or hostname of the target node for this ProxySQL deployment, then hit Enter or click the + button. If you want to replace the IP address or hostname with another one, just fill in the field again and hit Enter or click the + button.

    3. Admin port is set to 6032. Replace it if you want a unique port number assigned.

    4. Listening port is set to 6033. Replace it if you want a unique port number assigned.

    5. Under ProxySQL Configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you need to disable the firewall (default); otherwise, uncheck it if your firewall already has the required entries configured.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you need to disable SELinux/AppArmor. If you uncheck it, make sure SELinux/AppArmor is properly configured to allow creating files and opening sockets so the database can run properly.

      3. Import configuration. Import and apply the ProxySQL configuration from an existing instance or from a backup.

      4. Use native clustering. The ProxySQL server will be created using native clustering and an entry will be created in the proxysql_servers table.

  6. Click Continue.

  7. Second step is Configuration.

    1. Administrative user default is set to proxysql-admin. Replace the value if you need a unique username for this field.

    2. Administrative password. Set your desired password for the Administrative user.

    3. Monitor user default is set to proxysql-monitor. Replace the value if you need a unique username for this field.

    4. Monitor password. Set your desired password for the Monitor user.

    5. Under Database user (Optional), as the name states, this is optional. If you need to set a new user, click Create new user button.

      1. On the Database username, set the database user you want to create.

      2. On the Database password, set the database password you want to assign for your database user you created.

      3. On the Database name, set the databases that this user shall be privileged to open or read.

      4. On the MySQL privilege(s), set the privileges the user should have, using MySQL privilege statements.

    6. Under Database user (Optional), if you choose Existing user instead, you can assign an existing user to ProxySQL for load balancing.

    7. On the Database username, choose your existing database username.

    8. On the Database password, the database password for your existing user. Make sure you have the password set correctly. Otherwise, the user can have trouble connecting to the target database nodes during load balancing.

  8. Click Continue.

  9. Third step is Server instances. Depending on how many servers you have in the cluster, parameters are set individually per server.

    1. Max replication lag is set to 10. If you need higher or lower, replace it with your desired value.

    2. Max connection is set to 100. If you need higher or lower, replace it with your desired value.

    3. Weight is set to 1. If you need higher or lower, replace it with your desired value. Make sure you understand how ProxySQL uses weight.

    4. Are you using implicit transactions? Set to On if you rely on SET autocommit=0 to create a transaction for you. Set to Off if you use BEGIN or START TRANSACTION to create a transaction for you.

  10. Click Continue to proceed to the last step, Preview.

  11. The fourth and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  • Add a new ProxySQL load balancer to an existing cluster with ID 97:

    s9s cluster \
        --cluster-id=97 \
        --add-node \
        --nodes="proxysql://192.168.40.101?db_username=proxysql-s9s&db_password=admin&db_database='*.*'&db_privs='ALL PRIVILEGES'" \
        --wait \
        --log
    
    • where --cluster-id=97 targets the cluster having the cluster ID of 97 (the MySQL Replication cluster for this example deployment)
    • sets up the node for ProxySQL deployment in this parameter --nodes="proxysql://192.168.40.101?db_username=proxysql-s9s&db_password=admin&db_database='*.*'&db_privs='ALL PRIVILEGES'"
    • where proxysql://192.168.40.101 is the target node where ProxySQL shall be installed
    • where db_username=proxysql-s9s sets proxysql-s9s as a new user to be created on your database nodes (this is not the ProxySQL admin user), using the password set in db_password=admin, i.e. admin is the password in this example.
    • where db_database='*.*' sets coverage of databases for this user
    • where the db_privs='ALL PRIVILEGES' sets the privilege to your MySQL database for this user.
    • where --wait and --log, as the names suggest, wait for the job to finish and print the job logs.

Once a ProxySQL node is deployed and added in the cluster, ClusterControl will deploy Prometheus exporter/agents for ProxySQL performance data. For ProxySQL management, go to ClusterControl GUI → Nodes → choose the ProxySQL node → ProxySQL.

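To confirm the new node after the job completes, you can list the cluster's nodes with the s9s CLI (cluster ID 97 is the example value from above):

    # The ProxySQL node should now appear in the node list for the cluster.
    s9s node --list --long --cluster-id=97
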
HAProxy

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose HAProxy

  5. First step is Where to install.

    1. Server address. Fill in the IP address or hostname of the target node for this HAProxy deployment, then hit Enter or click the + button. If you want to replace the IP address or hostname with another one, just fill in the field again and hit Enter or click the + button.

    2. Policy is set to leastconn. Click this drop-down to choose from the other available options, roundrobin and source.

    3. Listen port (read/write) is set to 3307. Replace it if you want a unique port number assigned.

    4. Listen port (read-only) is set to 3308. Replace it if you want a unique port number assigned.

    5. Install for read/write splitting (master-slave replication) is set to On by default. Turn it Off if you don't need it.

    6. Under Security configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you need to disable the firewall (default); otherwise, uncheck it if your firewall already has the required entries configured.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you need to disable SELinux/AppArmor. If you uncheck it, make sure SELinux/AppArmor is properly configured to allow creating files and opening sockets so the database can run properly.

  6. Click Continue.

  7. Second step is Installation Settings.

    1. Overwrite existing /usr/local/sbin/mysqlchk on target. This overwrites any existing file found at /usr/local/sbin/mysqlchk. If the file exists, back it up first to avoid losing changes you might have made to the original.
  8. Third step is Advanced settings.

    1. Stats socket default is set to /var/run/haproxy.socket. Replace the value if you need to point to another destination or filename.

    2. Admin port. Replace it if you want a unique port number assigned for your administrative auth page when accessed via HTTP.

    3. Admin user default is set to admin. Replace the value if you need a unique username for this field.

    4. Admin password default is set to admin. Set your desired password for the Admin user.

    5. Backend name (RW) default is set to haproxy_3307_rw. Change it to your desired backend name. This shall reflect in your haproxy.cfg.

    6. Backend name (RO) default is set to haproxy_3308_ro. Change it to your desired backend name. This shall reflect in your haproxy.cfg.

    7. Timeout server (in seconds) default is set to 10800. If you change this, the timeout for connections to the backend servers (your database nodes) will use this value.

    8. Timeout client (in seconds) default is set to 10800. If you change this, the timeout for client connections (your applications or users) to the HAProxy server will use this value.

    9. Max connections frontend default is set to 8192. Replace the value if you need either lower or higher than this default value.

    10. Max connections backend (per instance) default is set to 64. Replace the value if you need either lower or higher than this default value. In case you are expecting huge client connections, increase this to 1024 or higher.

    11. xinetd allow connections from defaults to 0.0.0.0/0 to allow connections from anywhere via xinetd. However, you can leave this at its default value, since ClusterControl uses systemd to manage the tcp-check, which uses the /usr/local/sbin/mysqlchk script.

  9. Click Continue.

  10. Fourth step is Server instances. Depending on how many servers you have in the cluster, parameters are set individually per server.

    1. Server instances in the load balancer lists all the identified database nodes in this MySQL Replication cluster. Each node is set to Off.

      1. When a server is set to On, it will show you the Advanced options allowing you to set the Role field to Active or Backup.

      2. When a server is set to On, it will also allow you to specify the Connection address to use if the target node has multiple physical device addresses.

  11. Click Continue to proceed to the last step, Preview.

  12. The fifth and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  • Add a new HAProxy load balancer:

    s9s cluster \
        --cluster-id=97 \
        --add-node \
        --nodes="haproxy://192.168.40.101?backend_name_ro=haproxy_3308_ro&backend_name_rw=haproxy_3307_rw&lb_user=myuser&lb_password=password&rw_splitting=true" \
        --wait \
        --log
    
    • where --cluster-id=97 targets the cluster having the cluster ID of 97 (the MySQL Replication cluster for this example deployment)
    • sets up the node for HAProxy deployment in the parameter --nodes="haproxy://192.168.40.101?backend_name_ro=haproxy_3308_ro&.....
    • where haproxy://192.168.40.101 is the target node where HAProxy shall be installed
    • where backend_name_ro=haproxy_3308_ro&backend_name_rw=haproxy_3307_rw are the backend read-only and backend read-write names are set
    • where lb_user=myuser&lb_password=password sets the username and password for administrative auth page.
    • where rw_splitting=true tells ClusterControl to set up read/write splitting, i.e. enable ports 3307 and 3308 as the read-write and read-only listening ports (a connection sketch follows this list).
    • where --wait and --log, as the names suggest, wait for the job to finish and print the job logs.

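As a usage sketch, once HAProxy is up you can direct traffic through the two listeners; the HAProxy address and the appuser account are placeholders:

    # Writes go through the read-write listener on 3307...
    mysql -h 192.168.40.101 -P 3307 -u appuser -p -e "SELECT @@hostname;"
    # ...while reads go through the read-only listener on 3308.
    mysql -h 192.168.40.101 -P 3308 -u appuser -p -e "SELECT @@hostname;"
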
Once an HAProxy node is deployed and added to the cluster, ClusterControl will deploy the HAProxy Exporter for Prometheus. You can view the HAProxy monitoring state by going to ClusterControl GUI → Dashboards and choosing HAProxy Overview in the dropdown. To manage the HAProxy service, go to ClusterControl GUI → Nodes → choose the HAProxy node → HAProxy; this opens the HAProxy stats page with the possibility to enable or disable a backend server.

Keepalived

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose Keepalived

  5. First step is Configuration.

    1. Load balancer type. Choose your target load balancer type and make sure there is more than 1 (one) load balancer node, otherwise the job will fail.

    2. Under Security configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you need to disable the firewall (default); otherwise, uncheck it if your firewall already has the required entries configured.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you need to disable SELinux/AppArmor. If you uncheck it, make sure SELinux/AppArmor is properly configured to allow creating files and opening sockets so the database can run properly.

  6. Click Continue.

  7. Second step is Host Configuration.

    1. Virtual IP. Fill in the IP address or hostname of the virtual IP that Keepalived will use for managing VRRP. Make sure to hit Enter to add the entry.

    2. Network Interface is the physical device ID where the VIP shall be assigned; this device carries the IP or hostname used for communication between your database nodes and ClusterControl.

  8. Click Continue to proceed to the last step, Preview.

  9. The third and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  10. Click the Finish button to deploy the job.

  • For Keepalived, it is mandatory to add two Keepalived nodes (two load balancer nodes):

    s9s cluster \
        --cluster-id=99 \
        --add-node \
        --nodes="keepalived://192.168.40.60;keepalived://192.168.40.61" \
        --virtual-ip="192.168.40.231" \
        --eth=enp0s8 \
        --wait \
        --log
    
    • where --cluster-id=99 targets the cluster having the cluster ID of 99 (the MySQL Replication cluster for this example deployment)
    • sets up the node for Keepalived deployment in this parameter --nodes="keepalived://192.168.40.60;keepalived://192.168.40.61"
    • where keepalived://192.168.40.60;keepalived://192.168.40.61 are the target nodes where Keepalived shall be installed. Make sure there is more than one node.
    • where --virtual-ip="192.168.40.231" specifies the virtual IP to be set. In this example, it's 192.168.40.231.
    • where --eth=enp0s8 sets the Ethernet device ID to enp0s8 in this example. This is the device the virtual IP address will be assigned to (a verification sketch follows this list).
    • where --wait and --log, as the names suggest, wait for the job to finish and print the job logs.

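A quick verification sketch, run on the Keepalived nodes after the job completes (the interface and VIP are the example values from above):

    # The current MASTER should hold the virtual IP on the chosen interface.
    ip addr show enp0s8 | grep 192.168.40.231
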
MariaDB MaxScale

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose MariaDB MaxScale

  5. First step is Where to install.

    1. Server address. Fill in the IP address or hostname of the target node for the MaxScale deployment, then hit Enter or click the + button. If you want to replace the IP address or hostname with another one, just fill in the field again and hit Enter or click the + button.

    2. Under Configure MaxScale,

      1. Threads is set to a default value of 4. This is required, so make sure you do not omit it or leave the field empty.

      2. RR port (Port for Round Robin Listener) is set to default 4006. Replace it if you want a unique port number assigned. Make sure you do not omit it or leave the field empty, since it is a required field.

      3. RW port (Port for read/write split Listener) is set to default 4008. Replace it if you want a unique port number assigned. Make sure you do not omit it or leave the field empty, since it is a required field.

    3. Disable firewall is checked by default. Leave it checked if you need to disable the firewall (default); otherwise, uncheck it if your firewall already has the required entries configured.

    4. Disable SELinux/AppArmor is checked by default. Leave it checked if you need to disable SELinux/AppArmor. If you uncheck it, make sure SELinux/AppArmor is properly configured to allow creating files and opening sockets so the database can run properly.

  6. Second step is Configuration.

    1. Set the MaxScale admin username field to your desired username other than the default admin.

    2. MaxScale admin password cannot be modified. It is set to its default value mariadb.

    3. Set the MaxScale MySQL username field to your desired monitoring username; this username will be added as a new user to your database.

    4. Set the MaxScale MySQL password field to your desired monitoring password for your monitoring username. Make sure not to leave this field empty.

  7. Click Continue.

  8. Third step is Server instances.

    1. The Server instances in the load balancer section lists all the database nodes in the cluster. By default, each is set to Off. Make sure to set it to On for the nodes that you want to participate in the load balancer.
  9. Click Continue to proceed to the last step, Preview.

  10. The fourth and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  11. Click the Finish button to deploy the job.

  • To add a MaxScale load balancer:

    s9s cluster \
        --cluster-id=9 \
        --add-node \
        --nodes="maxscale://192.168.40.41?maxscale_admin_user=admin&maxscale_admin_password=mariadb&maxscale_mysql_user=maxscale_s9s_user&maxscale_mysql_password=maxscale_s9s_P@55w0rd&maxscale_rr_port=4006&maxscale_rw_port=4008&maxscale_threads=4" \
        --wait \
        --log
    
    • where --cluster-id=9 targets the cluster having the cluster ID of 9 (the MySQL Replication cluster for this example deployment),
    • --add-node parameter to set the type of action to do,
    • sets up the node for MaxScale deployment in the parameter --nodes="maxscale://192.168.40.41?maxscale_admin_user=admin&maxscale_admin_password...",
    • where maxscale://192.168.40.41 is the target node where MaxScale shall be installed. Only one node can be installed at a time,
    • where maxscale_admin_user=admin specifies the username for the admin,
    • where maxscale_admin_password=mariadb specifies the password for the admin,
    • where maxscale_mysql_user=maxscale_s9s_user specifies the username for the monitor user to be added to the MySQL Replication database,
    • where maxscale_mysql_password=maxscale_s9s_P@55w0rd is the password for the monitor user,
    • where maxscale_rr_port=4006 sets the port number of the read-only router port,
    • where maxscale_rw_port=4008 sets the port number of the read/write router port,
    • where maxscale_threads=4 sets the number of MaxScale threads,
    • where --wait and --log, as the names suggest, wait for the job to finish and print the job logs (a verification sketch follows).

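Once the job completes, you can optionally verify the setup with maxctrl on the MaxScale node, using the admin credentials from the example above:

    # List the backend servers and services as MaxScale sees them.
    maxctrl -u admin -p mariadb list servers
    maxctrl -u admin -p mariadb list services
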
MySQL Galera

The following vendors and versions are supported for adding a load balancer node:

  • Percona XtraDB Cluster - 8.0.
  • MariaDB Galera - 10.4, 10.5, 10.6 (LTS), 10.11 (LTS) and 11.4 (LTS).

In MySQL Galera, whether the vendor is MariaDB or Percona, ClusterControl supports several types of load balancers. The supported balancers are the following:

  • ProxySQL - ProxySQL is a database-aware load balancer that understands the MySQL protocol and can provide advanced traffic control for database transactions. Use sticky queries, set priorities for important queries, or even prevent queries from being executed.
  • HAProxy - HAProxy is a very fast and reliable reverse proxy offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
  • Keepalived - Keepalived uses the IP Virtual Server (IPVS) kernel module to provide transport layer (Layer 4) load balancing, redirecting requests for network-based services to individual members of a server cluster.
  • MariaDB MaxScale - MariaDB MaxScale is a database proxy that extends the high availability, scalability, and security of MariaDB Server while at the same time simplifying application development by decoupling it from underlying database infrastructure.

In a typical environment, especially in production, a setup should use one of the three supported load balancers in ClusterControl: ProxySQL, HAProxy, or MariaDB MaxScale. Keepalived is then deployed on top of any of these three load balancers.

Each of these load balancers has a different configuration, philosophy, and mechanism. Make sure you are well prepared and have the necessary knowledge in order to manage the load balancer for your MySQL Galera cluster efficiently.

Default configuration

Each load balancer supported by ClusterControl ships with a different default configuration. These are described in the following sub-sections.

ProxySQL

Only the 2.x version is currently supported by ClusterControl. ProxySQL has an abundance of parameters and can be customized extensively. However, this document focuses only on the parameters that ClusterControl changes.

  • Admin port points to 6032 and Listening port points to 6033 unless changed. Make sure your firewall allows these ports so the connection can be established successfully.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.
  • Uses the proxysql-admin username for the Administration user and proxysql-monitor for the Monitor user, respectively.
  • Max replication lag is set to 10 unless changed.
  • Max connection is set to 1 unless changed.
  • Weight is set to 1 unless changed.
  • Implicit Transactions is disabled. When disabled/off, this means that your database transactions rely on using BEGIN/START TRANSACTION and COMMIT/ROLLBACK.
  • Creates the proxysql-monitor user (or whatever value is set as the username) on the target cluster's primary node.
  • Uses hostgroup_id=10 for writer server.
  • Uses hostgroup_id=20 for reader servers.
  • Assigns each server to a hostgroup based on its read-write ability. This applies only to database nodes that are registered in ClusterControl and belong to the target cluster where the load balancer is deployed.
  • Creates the database user that was set (if any) during ProxySQL settings in the GUI configuration. This user shall be created as a user in ProxySQL for load balancing and shall also be created as a new user into your database servers.
  • Adds query rules to the ProxySQL mysql_query_rules table with match patterns ^SELECT .* FOR UPDATE with destination_hostgroup: 10, ^SELECT .* with destination_hostgroup: 20, and .* with destination_hostgroup: 10.

HAProxy

ClusterControl will rely on the version available in the source repository of your target node's operating system.

  • Uses port 3307 for read-write transactions.
  • Uses the backend name haproxy_<ip_address>_3307_rw for read-write, where <ip_address> is the address of the target node.
  • Default policy is leastconn.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.
  • Installs /usr/local/sbin/mysqlchk. If this is already installed, the default is to overwrite the file.
  • Sets up the administrative auth page.
  • Uses admin/admin user and password for the administrative auth page but can be changed in the GUI.
  • Uses stats/admin user and password for the stats page. You can only change this manually in the haproxy.cfg file.
  • Stats socket points to /var/run/haproxy.socket.
  • Default admin port is 9600.
  • Timeout server is set to 10800 seconds.
  • Timeout client is set to 10800 seconds.
  • Global max connections frontend is set to 8192.
  • Max connections backend (per instance) is set to 64. If you expect a huge number of client connections, increase this to 1024 or higher.
  • All database nodes default to check when configured in the listen directive.
  • Sets up stats and admin page with insecure-password.
  • Sets up and installs a systemd script named cc-haproxy-mysqlchk for the service and socket listening on port 9200.

Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.

Keepalived

ClusterControl will rely on the version available in the source repository of your target node's operating system. ClusterControl supports the ProxySQL, HAProxy, and MaxScale load balancer types for Keepalived to manage automatic failover, high availability, and redundancy. Other defaults are listed below:

  • The first node during setup gets the highest priority. Priority starts at 100 and increments by 1.
  • Tunes OS by enabling net.ipv4.ip_nonlocal_bind=1.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.

Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.

MariaDB MaxScale

ClusterControl shall deploy the MaxScale version specified in the template file /usr/share/cmon/templates/packages.conf found on the ClusterControl controller host. If you need a version other than the one in the file, you can edit the file and replace the version. Other defaults are as follows:

  • ClusterControl shall add MariaDB MaxScale repository to install the package.
  • MaxScale threads default is set to 4.
  • Read-only traffic port defaults to 4006.
  • Read-write traffic port defaults to 4008.
  • MaxScale admin user/password defaults to admin/mariadb, respectively.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.
  • Creates the monitor user with the following privileges: SHOW DATABASES, REPLICATION CLIENT.

Note

If you want a specific version to be installed for your MariaDB MaxScale, update the /usr/share/cmon/templates/packages.conf file under the [maxscale] section and set the requiredVersion parameter to your desired version. See also Configuration Template for related context.

Deployment steps

ProxySQL

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose ProxySQL

  5. First step is Where to install.

    1. Version is set to 2.x

    2. Server address. Fill in the IP address or hostname of the target node for this ProxySQL deployment, then hit Enter or click the + button. If you want to replace the IP address or hostname with another one, just fill in the field again and hit Enter or click the + button.

    3. Admin port is set to 6032. Replace it if you want a unique port number assigned.

    4. Listening port is set to 6033. Replace it if you want a unique port number assigned.

    5. Under ProxySQL Configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you need to disable the firewall (default); otherwise, uncheck it if your firewall already has the required entries configured.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you need to disable SELinux/AppArmor. If you uncheck it, make sure SELinux/AppArmor is properly configured to allow creating files and opening sockets so the database can run properly.

      3. Import configuration. Import and apply the ProxySQL configuration from an existing instance or from a backup.

      4. Use native clustering. The ProxySQL server will be created using native clustering and an entry will be created in the proxysql_servers table.

  6. Click Continue.

  7. Second step is Configuration.

    1. Administrative user default is set to proxysql-admin. Replace the value if you need a unique username for this field.

    2. Administrative password. Set your desired password for the Administrative user.

    3. Monitor user default is set to proxysql-monitor. Replace the value if you need a unique username for this field.

    4. Monitor password. Set your desired password for the Monitor user.

    5. Under Database user (Optional), as the name states, this is optional. If you need to set a new user, click Create new user button.

      1. On the Database username, set the database user you want to create.

      2. On the Database password, set the database password you want to assign for your database user you created.

      3. On the Database name, set the databases that this user shall be privileged to open or read.

      4. On the MySQL privilege(s), set the privileges the user should have, using MySQL privilege statements.

    6. Under Database user (Optional), if you choose Existing user instead, you can assign an existing user to ProxySQL for load balancing.

    7. On the Database username, choose your existing database username.

    8. On the Database password, the database password for your existing user. Make sure you have the password set correctly. Otherwise, the user can have trouble connecting to the target database nodes during load balancing.

  8. Click Continue.

  9. Third step is Server instances. Depending on how many servers you have in the cluster, parameters are set individually per server.

    1. Max replication lag is set to 10. If you need higher or lower, replace it with your desired value.

    2. Max connection is set to 100. If you need higher or lower, replace it with your desired value.

    3. Weight is set to 1. If you need higher or lower, replace it with your desired value. Make sure you understand how ProxySQL uses weight.

    4. Are you using implicit transactions? Set to On if you rely on SET autocommit=0 to create a transaction for you. Set to Off if you use BEGIN or START TRANSACTION to create a transaction for you.

  10. Click Continue to proceed to the last step, Preview.

  11. The fourth and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  • Add a ProxySQL load balancer for a Galera cluster:

    s9s cluster \
        --cluster-id=98 \
        --add-node \
        --nodes="proxysql://192.168.40.60?db_username=proxysql-s9s&db_password=admin&db_database='*.*'&db_privs='ALL PRIVILEGES'" \
        --wait \
        --log
    
    • where --cluster-id=98 targets the cluster having the cluster ID of 98 (the Galera cluster for this example deployment),
    • sets up the node for ProxySQL deployment in this parameter --nodes="proxysql://192.168.40.60?db_username=proxysql-s9s&db_password=admin&db_database='*.*'&db_privs='ALL PRIVILEGES'",
    • where proxysql://192.168.40.60 is the target node where ProxySQL shall be installed,
    • where db_username=proxysql-s9s sets proxysql-s9s as a new user to be created on your database nodes (this is not the ProxySQL admin user), using the password set in db_password=admin, i.e. admin is the password in this example,
    • where db_database='*.*' sets coverage of databases for this user,
    • where the db_privs='ALL PRIVILEGES' sets the privilege to your MySQL database for this user,
    • where --wait and --log, as the names suggest, wait for the job to finish and print the job logs.

Once a ProxySQL node is deployed and added in the cluster, ClusterControl will deploy Prometheus exporter/agents for ProxySQL performance data. For ProxySQL management, go to ClusterControl GUI → Nodes → choose the ProxySQL node → ProxySQL.

HAProxy

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose HAProxy

  5. First step is Where to install.

    1. Server address. Fill in the IP address or hostname of the target node for this HAProxy deployment, then hit Enter or click the + button. If you want to replace the IP address or hostname with another one, just fill in the field again and hit Enter or click the + button.

    2. Policy is set to leastconn. Click this drop-down to choose from the other available options, roundrobin and source.

    3. Listen port (read/write) is set to 3307. Replace it if you want a unique port number assigned.

    4. Install for read/write splitting (master-slave replication) is set to On by default. Turn it Off if you don't need it.

    5. Under Security configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you need to disable the firewall (default); otherwise, uncheck it if your firewall already has the required entries configured.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you need to disable SELinux/AppArmor. If you uncheck it, make sure SELinux/AppArmor is properly configured to allow creating files and opening sockets so the database can run properly.

  6. Click Continue.

  7. Second step is Installation Settings.

    1. Overwrite existing /usr/local/sbin/mysqlchk on target. This overwrites any existing file found at /usr/local/sbin/mysqlchk. If the file exists, back it up first to avoid losing changes you might have made to the original.
  8. Third step is Advanced settings.

    1. Stats socket default is set to /var/run/haproxy.socket. Replace the value if you need to point to another destination or filename.

    2. Admin port. Replace it if you want a unique port number assigned for your administrative auth page when accessed via HTTP.

    3. Admin user default is set to admin. Replace the value if you need a unique username for this field.

    4. Admin password default is set to admin. Set your desired password for the Admin user.

    5. Backend name (RW) default is set to haproxy_3307_rw. Change it to your desired backend name. This shall reflect in your haproxy.cfg.

    6. Backend name (RO) default is set to haproxy_3308_ro. Change it to your desired backend name. This shall reflect in your haproxy.cfg.

    7. Timeout server (in seconds) default is set to 10800. If you change this, the timeout for connections to the backend servers (your database nodes) will use this value.

    8. Timeout client (in seconds) default is set to 10800. If you change this, the timeout for client connections (your applications or users) to the HAProxy server will use this value.

    9. Max connections frontend default is set to 8192. Replace the value if you need either lower or higher than this default value.

    10. Max connections backend (per instance) default is set to 64. Replace the value if you need either lower or higher than this default value. In case you are expecting huge client connections, increase this to 1024 or higher.

    11. xinetd allow connections from defaults to 0.0.0.0/0 to allow connections from anywhere via xinetd. However, you can leave this at its default value, since ClusterControl uses systemd to manage the tcp-check, which uses the /usr/local/sbin/mysqlchk script.

  9. Click Continue.

  10. Fourth step is Server instances. Depending on how many servers you have in the cluster, parameters are set individually per server.

    1. Server instances in the load balancer lists all the identified database nodes in this cluster. Each node is set to Off.

      1. When a server is set to On, it will show you the Advanced options allowing you to set the Role field to Active or Backup.

      2. When a server is set to On, it will also allow you to specify the Connection address to use if the target node has multiple physical device addresses.

  11. Click Continue to proceed to the last step, Preview.

  12. The fifth and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  • Add a HAProxy load balancer:

    s9s cluster \
        --cluster-id=98 \
        --add-node \
        --nodes="haproxy://192.168.40.60?backend_name_rw=haproxy_3307_rw&lb_user=myuser&lb_password=password&rw_splitting=true" \
        --wait \
        --log
    
    • where --cluster-id=98 targets the cluster having the cluster ID of 98 (the MySQL Galera cluster for this example deployment)
    • sets up the node for HAProxy deployment in the parameter --nodes="haproxy://192.168.40.60?backend_name_rw=haproxy_3307_rw&.....
    • where haproxy://192.168.40.60 is the target node where HAProxy shall be installed
    • where backend_name_rw=haproxy_3307_rw sets the read-write backend name
    • where lb_user=myuser&lb_password=password sets the username and password for administrative auth page.
    • where rw_splitting=true tells ClusterControl to set up read/write splitting, enabled on the 3307 listening port.
    • where --wait and --log, as the names suggest, wait for the job to finish and print the job logs.

Once an HAProxy node is deployed and added to the cluster, ClusterControl will deploy the HAProxy Exporter for Prometheus. You can view the HAProxy monitoring state by going to ClusterControl GUI → Dashboards and choosing HAProxy Overview in the dropdown. To manage the HAProxy service, go to ClusterControl GUI → Nodes → choose the HAProxy node → HAProxy; this opens the HAProxy stats page with the possibility to enable or disable a backend server.

Keepalived

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose Keepalived

  5. First step is Configuration.

    1. Load balancer type. Choose your target load balancer type and make sure there is more than 1 (one) load balancer node, otherwise the job will fail.

    2. Under Security configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you need to disable the firewall (default); otherwise, uncheck it if your firewall already has the required entries configured.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you need to disable SELinux/AppArmor. If you uncheck it, make sure SELinux/AppArmor is properly configured to allow creating files and opening sockets so the database can run properly.

  6. Click Continue.

  7. Second step is Host Configuration.

    1. Virtual IP. Fill in the IP address or hostname of the virtual IP that Keepalived will use for managing VRRP. Make sure to hit Enter to add the entry.

    2. Network Interface is the physical device ID where the VIP shall be assigned; this device carries the IP or hostname used for communication between your database nodes and ClusterControl.

  8. Click Continue to proceed to the last step, Preview.

  9. The third and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  10. Click the Finish button to deploy the job.

  • Add a pair of Keepalived hosts (it is mandatory to have more than one load balancer node):

    s9s cluster \
        --cluster-id=98 \
        --add-node \
        --nodes="keepalived://192.168.40.60;keepalived://192.168.40.61" \
        --virtual-ip="192.168.40.231" \
        --eth=enp0s8 \
        --wait \
        --log
    
    • where --cluster-id=98 targets the cluster having the cluster ID of 98 (the MySQL Galera cluster for this example deployment)
    • sets up the node for Keepalived deployment in this parameter --nodes="keepalived://192.168.40.60;keepalived://192.168.40.61"
    • where keepalived://192.168.40.60;keepalived://192.168.40.61 are the target nodes where Keepalived shall be installed. Make sure there is more than 1 (one) node.
    • where --virtual-ip="192.168.40.231" specifies the virtual IP to be set. In this example, it's 192.168.40.231.
    • where --eth=enp0s8 sets the Ethernet device ID to enp0s8 in this example. This is the device the virtual IP address will be assigned to.
    • where --wait and --log, as the names suggest, wait for the job to finish and print the job logs.

MariaDB MaxScale

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose MariaDB MaxScale

  5. First step is Where to install.

    1. Server address. Fill in the IP address or hostname of the target node for the MaxScale deployment, then hit Enter or click the + button. If you want to replace the IP address or hostname with another one, just fill in the field again and hit Enter or click the + button.

    2. Under Configure MaxScale,

      1. Threads is set to a default value of 4. This is required, so make sure you do not omit it or leave the field empty.

      2. RR port (Port for Round Robin Listener) is set to default 4006. Replace it if you want a unique port number assigned. Make sure you do not omit it or leave the field empty, since it is a required field.

      3. RW port (Port for read/write split Listener) is set to default 4008. Replace it if you want a unique port number assigned. Make sure you do not omit it or leave the field empty, since it is a required field.

    3. Disable firewall is checked by default. Leave it checked if you need to disable the firewall (default); otherwise, uncheck it if your firewall already has the required entries configured.

    4. Disable SELinux/AppArmor is checked by default. Leave it checked if you need to disable SELinux/AppArmor. If you uncheck it, make sure SELinux/AppArmor is properly configured to allow creating files and opening sockets so the database can run properly.

  6. Second step is Configuration.

    1. Set the MaxScale admin username field to your desired username other than the default admin.

    2. MaxScale admin password cannot be modified. It is set to its default value mariadb.

    3. Set the MaxScale MySQL username field to your desired monitoring username; this username will be added as a new user to your database.

    4. Set the MaxScale MySQL password field to your desired monitoring password for your monitoring username. Make sure not to leave this field empty.

  7. Click Continue.

  8. Third step is Server instances.

    1. The Server instances in the load balancer section lists all the database nodes in the cluster. By default, each is set to Off. Make sure to set it to On for the nodes that you want to participate in the load balancer.
  9. Click Continue to proceed to the last step, Preview.

  10. The fourth and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  11. Click the Finish button to deploy the job.

  • Add a MaxScale load balancer:

    s9s cluster \
        --cluster-id=98 \
        --add-node \
        --nodes="maxscale://192.168.40.60?maxscale_admin_user=admin&maxscale_admin_password=mariadb&maxscale_mysql_user=maxscale_s9s_user&maxscale_mysql_password=maxscale_s9s_P@55w0rd&maxscale_rr_port=4006&maxscale_rw_port=4008&maxscale_threads=4" \
        --wait \
        --log
    
    • where --cluster-id=98 targets the cluster having the cluster ID of 98 (the MySQL Galera cluster for this example deployment),
    • --add-node parameter to set the type of action to do,
    • sets up the node for MaxScale deployment in the parameter --nodes="maxscale://192.168.40.60?maxscale_admin_user=admin&maxscale_admin_password...",
    • where maxscale://192.168.40.60 is the target node where MaxScale shall be installed. Only one node can be installed at a time,
    • where maxscale_admin_user=admin specifies the username for the admin,
    • where maxscale_admin_password=mariadb specifies the password for the admin,
    • where maxscale_mysql_user=maxscale_s9s_user specifies the username for the monitor user to be added to the MySQL database,
    • where maxscale_mysql_password=maxscale_s9s_P@55w0rd is the password for the monitor user,
    • where maxscale_rr_port=4006 sets the port number of the read-only router port,
    • where maxscale_rw_port=4008 sets the port number of the read/write router port,
    • where maxscale_threads=4 sets the number of MaxScale threads,
    • where --wait and --log, as the names suggest, wait for the job to finish and print the job logs.

PostgreSQL

ClusterControl supports the following types of load balancers for PostgreSQL:

  • HAProxy - HAProxy is a very fast and reliable reverse proxy offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
  • Keepalived - Keepalived uses the IP Virtual Server (IPVS) kernel module to provide transport layer (Layer 4) load balancing, redirecting requests for network-based services to individual members of a server cluster.
  • PgBouncer - PgBouncer is an open-source, lightweight, single-binary connection pooler for PostgreSQL. It can pool connections to one or more databases (on possibly different servers) and serve clients over TCP and Unix domain sockets. PgBouncer maintains a pool of connections for each unique (user, database) pair.

The following versions of PostgreSQL are supported for deploying the supported load balancers:

Default configuration

Each load balancer supported by ClusterControl ships with a different default configuration. These are described in the following sub-sections.

HAProxy

For a PostgreSQL cluster, ClusterControl supports load balancing either your PostgreSQL nodes or your PgBouncer nodes.

If you have more than one PgBouncer node set up, ClusterControl provides a straightforward UX/UI for setting them up through the GUI. The same applies to setting up HAProxy to load balance your PostgreSQL cluster: setup through the GUI is straightforward.

ClusterControl will also rely on the version available in the source repository of your target node's operating system. The defaults are listed below (a connection sketch follows the list):

  • Uses port 5433 for read-write transactions.
  • Uses port 5434 for read-only transactions.
  • Uses the backend name haproxy_<ip_address>_5433_rw for read-write and haproxy_<ip_address>_5434_ro for read-only, where <ip_address> is the IP of the target node.
  • Default load balancing policy is leastconn.
  • Installs /usr/local/sbin/postgreschk_rw_split. If this is already installed, the default is to overwrite the file.
  • Sets up the administrative auth page.
  • Uses admin/admin user and password for the administrative auth page, but this can be changed in the GUI.
  • Uses stats/admin user and password for the stats page. You can only change this manually in the haproxy.cfg file.
  • Stats socket points to /var/run/haproxy.socket.
  • HAProxy admin port uses 9600.
  • Timeout server is set to 10800 seconds.
  • Timeout client is set to 10800 seconds.
  • Max connections frontend is set to 8192.
  • Max connections backend (per instance) is set to 64. If you expect a huge number of client connections, increase this to 1024 or higher.
  • All database nodes default to check when configured in the listen directive.
  • Sets up the stats and admin page with insecure-password.
  • Sets up and installs a systemd script named cc-haproxy-postgreschk for the service and socket listening on port 9201.

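As a usage sketch, once HAProxy is up you can connect through the two listeners with psql; the HAProxy address, user, and database are placeholders:

    # Read-write listener...
    psql -h 192.168.40.101 -p 5433 -U myuser mydb
    # ...and read-only listener.
    psql -h 192.168.40.101 -p 5434 -U myuser mydb
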
Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.

Keepalived

ClusterControl will rely on the version available in the source repository of your target node's operating system. For a PostgreSQL cluster, ClusterControl supports only HAProxy as the load balancer under Keepalived to manage automatic failover, high availability, and redundancy. Other defaults are listed below:

  • The first node during the setup shall have the higher priority. Priority starts at 100 and increments with 1.
  • Tunes OS by enabling net.ipv4.ip_nonlocal_bind=1.
  • Disable Firewall and Disable SELinux/AppArmor are enabled.

Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.
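
After deployment, the kernel tuning noted above can be verified on each Keepalived node:

    # Confirm the non-local bind setting Keepalived relies on:
    sysctl net.ipv4.ip_nonlocal_bind
    # expected output: net.ipv4.ip_nonlocal_bind = 1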

PgBouncer

ClusterControl supports deploying PgBouncer only on a registered PostgreSQL cluster node. Say you have three nodes registered in your PostgreSQL cluster in ClusterControl; you can deploy PgBouncer only on one of these registered nodes. Once deployed, ClusterControl handles the PgBouncer setup, such as creating the user and configuring authentication. Below are some defaults applied during deployment:

  • Log rotation is triggered at 10 MiB. When the threshold is reached, the current log is renamed and compressed, and a new file is started.
  • Parameter pool_mode is set to session. It can be overridden by a pool's own configuration.
  • Parameter server_reset_query is set to DISCARD ALL.
  • The default admin user is pgbadmin and its password is read from the cluster's configuration file /etc/cmon.d/cmon_<cluster_id>.cnf.
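
These defaults can be inspected on a running instance through the PgBouncer admin console. A minimal sketch, assuming a hypothetical node at 192.168.40.42 listening on the default port 6432, with the pgbadmin password taken from the cmon configuration file mentioned above:

    # Show the effective PgBouncer settings and filter the defaults of interest:
    psql -h 192.168.40.42 -p 6432 -U pgbadmin -d pgbouncer -c "SHOW CONFIG;" \
        | grep -E "pool_mode|server_reset_query"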

Deployment steps

HAProxy

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose HAProxy

  5. First step is Where to install.

    1. Server address. Fill in the IP address or hostname of the target node for this HAProxy deployment, then press Enter or click the + button. To replace the address with another one, fill in the field again and press Enter or click the + button.

    2. Policy is set to leastconn. The drop-down also offers roundrobin and source.

    3. Listen port (read/write) is set to 5433. Replace it if you want a unique port number assigned.

    4. Listen port (read-only) is set to 5434. Replace it if you want a unique port number assigned.

    5. Install for read/write splitting (master-slave replication) is set to On by default. Turn it Off if you don't need it.

    6. Under Security configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you want the firewall disabled; uncheck it if you maintain your own firewall rules.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you want SELinux/AppArmor disabled. If you uncheck it, make sure SELinux/AppArmor is configured to allow creating files and opening sockets so the database can run properly.

  6. Click Continue.

  7. Second step is Installation settings.

    1. Overwrite existing /usr/local/sbin/postgreschk_rw_split on targets. This overwrites the existing file, if any, at /usr/local/sbin/postgreschk_rw_split. Back up the file if it exists so that any changes you made to the original are not lost.

  8. Still in the second step are the Advanced settings.

    1. Stats socket default is set to /var/run/haproxy.socket. Replace the value if you need to point to another destination or filename.

    2. Admin port default is set to 9600. Replace it if you want a different port number assigned for your administrative auth page when accessed via HTTP.

    3. Admin user default is set to admin. Replace the value if you need a different username for this field.

    4. Admin password default is set to admin. Set your desired password for the Admin user.

    5. Backend name (RW) default is set to haproxy_3307_rw. Change it to your desired backend name. This will be reflected in your haproxy.cfg.

    6. Backend name (RO) default is set to haproxy_3308_ro. Change it to your desired backend name. This will be reflected in your haproxy.cfg.

    7. Timeout server (in seconds) default is set to 10800. This controls the timeout for connections to the backend servers, that is, your database nodes.

    8. Timeout client (in seconds) default is set to 10800. This controls the timeout for client connections, that is, your applications or users connecting to the HAProxy server.

    9. Max connections frontend default is set to 8192. Replace the value if you need a lower or higher limit.

    10. Max connections backend (per instance) default is set to 64. Replace the value if you need a lower or higher limit. If you expect a huge number of client connections, increase this to 1024 or higher.

    11. xinetd allow connections from default is set to 0.0.0.0/0 to allow connections from anywhere when xinetd is used. You can leave this at its default since ClusterControl uses systemd to manage the TCP health check, which runs the /usr/local/sbin/postgreschk_rw_split script.

  9. Click Continue.

  10. Third step is Server instances. Depending on how many servers you have in the cluster, parameters are set individually per server.

    1. Server instances in the load balancer lists all identified database nodes in this PostgreSQL cluster. Each node is set to Off.

      1. When a server is set to On, the Advanced options appear, allowing you to set the Role field to Active or Backup.

      2. When a server is set to On, you can also specify the Connection address to use if the target node has multiple network addresses.

  11. Click Continue to proceed to the last step, Preview.

  12. Fourth and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node. Click Finish to deploy the job.

  • Add a HAProxy load balancer:

    s9s cluster \
        --cluster-id=108 \
        --add-node \
        --nodes="haproxy://192.168.40.42?lb_user=myuser&lb_password=password&rw_splitting=true" \
        --wait \
        --log
    
    • where --cluster-id=108 targets the cluster with cluster ID 108 (the PostgreSQL cluster in this example deployment).
    • where --nodes="haproxy://192.168.40.42?lb_user=myuser&lb_password=password&rw_splitting=true" defines the node for the HAProxy deployment.
    • where haproxy://192.168.40.42 is the target node where HAProxy will be installed.
    • where lb_user=myuser&lb_password=password sets the username and password for the administrative auth page.
    • where rw_splitting=true tells ClusterControl to enable read/write splitting, with the read/write listener on port 5433.
    • where --wait and --log, as the names suggest, print the job logs while waiting for the job to finish.

Once a HAProxy node is deployed and added to the cluster, ClusterControl will deploy the HAProxy Exporter for Prometheus. You can monitor HAProxy by going to ClusterControl GUI → Dashboards and choosing HAProxy Overview in the dropdown. To manage the HAProxy service, go to ClusterControl GUI → Nodes → choose the HAProxy node → HAProxy, which opens the HAProxy stats page where you can enable or disable a backend server.
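
A simple way to confirm that read/write splitting behaves as expected is to compare the two listener ports. A rough sketch, assuming the HAProxy node from the example above and an existing database user:

    # Port 5433 should land on the primary, port 5434 on a replica:
    psql -h 192.168.40.42 -p 5433 -U myuser -d postgres -c "SELECT pg_is_in_recovery();"  # expect f
    psql -h 192.168.40.42 -p 5434 -U myuser -d postgres -c "SELECT pg_is_in_recovery();"  # expect t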

Keepalived

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose Keepalived

  5. First step is Configuration.

    1. Load balancer type. Choose your target load balancer type and make sure more than one load balancer node exists; otherwise, the job will fail.

    2. Under Security configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you want the firewall disabled; uncheck it if you maintain your own firewall rules.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you want SELinux/AppArmor disabled. If you uncheck it, make sure SELinux/AppArmor is configured to allow creating files and opening sockets so the database can run properly.

  6. Click Continue.

  7. Second step is Host Configuration.

    1. Virtual IP. Fill in the IP address of the virtual IP that Keepalived will manage via VRRP. Make sure to press Enter to add the entry.

    2. Network interface is the network device to which the VIP will be assigned; use the interface that carries the IP or hostname used for communication between your database nodes and ClusterControl.

  8. Click Continue to proceed to the last step, Preview.

  9. Third and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  10. Click Finish button to deploy the job.

  • Add two Keepalived nodes (it is mandatory to have more than one load balancer):

    s9s cluster \
        --cluster-id=108 \
        --add-node \
        --nodes="keepalived://192.168.40.43;keepalived://192.168.40.42" \
        --virtual-ip="192.168.40.231" \
        --eth=eth1 \
        --wait \
        --log
    
    • where --cluster-id=108 targets the cluster with cluster ID 108 (the PostgreSQL cluster in this example deployment).
    • where --nodes="keepalived://192.168.40.43;keepalived://192.168.40.42" defines the nodes for the Keepalived deployment.
    • where keepalived://192.168.40.43;keepalived://192.168.40.42 are the target nodes where Keepalived will be installed; make sure to specify more than one node.
    • where --virtual-ip="192.168.40.231" specifies the virtual IP to be set, 192.168.40.231 in this example.
    • where --eth=eth1 sets the network interface (eth1 in this example) to which the virtual IP address will be assigned.
    • where --wait and --log, as the names suggest, print the job logs while waiting for the job to finish.
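
Once the job finishes, you can verify which node currently holds the virtual IP. A quick check, assuming the interface and VIP from the example above:

    # Run on each Keepalived node; only the active one should print the VIP:
    ip addr show eth1 | grep 192.168.40.231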

PgBouncer

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose PgBouncer

  5. First step is Authentication.

    1. PgBouncer admin user is your admin user. Fill in your desired admin user here.

    2. PgBouncer admin password is your admin password. Fill in your desired admin password here.

  6. Second step is Add nodes.

    1. PgBouncer node is the IP address/hostname of your target node. Choose from your existing nodes in your PostgreSQL cluster.

    2. Listen port defaults to 6432. Replace it if you want a unique port number assigned.

  7. Click Continue.

  8. Click Continue to proceed to the last step, Preview.

  9. Third and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  10. Click Finish button to deploy the job.

  • Add a PgBouncer node:

    s9s cluster \
        --cluster-id=108 \
        --add-node \
        --nodes="PgBouncer://192.168.40.42?service_name=pgbouncer_stage_env" \
        --wait \
        --log
    
    • where --cluster-id=108 targets the cluster with cluster ID 108 (the PostgreSQL cluster in this example deployment).
    • where --add-node sets the type of action to perform.
    • where --nodes="PgBouncer://192.168.40.42?service_name=pgbouncer_stage_env" defines the node for the PgBouncer deployment.
    • where PgBouncer://192.168.40.42 is the target node where PgBouncer will be installed; only one node can be installed at a time.
    • where service_name=pgbouncer_stage_env specifies the service name to use in systemd. It is also the basis for the configuration file path; in this example, pgbouncer.ini is created at /etc/pgbouncer_stage_env/pgbouncer.ini.
    • where --wait and --log, as the names suggest, print the job logs while waiting for the job to finish.

Once a PgBouncer node is deployed and added to the cluster, ClusterControl will deploy the PgBouncer Exporter for Prometheus. You can view it by going to the Dashboards tab → Load Balancer and choosing PgBouncer Overview in your PostgreSQL cluster dashboard.
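
To confirm the pooler is serving connections, route a test query through it and inspect the pools via the admin console. A minimal sketch, assuming the node and port from the example above plus an existing application user and database (myuser and mydb are hypothetical):

    # Route an application query through PgBouncer:
    psql -h 192.168.40.42 -p 6432 -U myuser -d mydb -c "SELECT 1;"
    # Inspect active pools as the admin user:
    psql -h 192.168.40.42 -p 6432 -U pgbadmin -d pgbouncer -c "SHOW POOLS;"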

TimescaleDB

ClusterControl supports the following load balancer types for TimescaleDB:

  • HAProxy - HAProxy is very fast and reliable reverse-proxy offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
  • Keepalived - Keepalived uses the IP Virtual Server (IPVS) kernel module to provide transport layer (Layer 4) load balancing, redirecting requests for network-based services to individual members of a server cluster.
  • PgBouncer - PgBouncer is an open-source, lightweight, single-binary connection pooler for PostgreSQL/TimescaleDB. It can pool connections to one or more databases (possibly on different servers) and serve clients over TCP and Unix domain sockets. PgBouncer maintains a pool of connections for each unique (user, database) pair.

The following versions of TimescaleDB are supported for deploying the supported load balancers:

  • TimescaleDB - 12, 13, 14, 15, and 16.

Default configuration

Each load balancer supported by ClusterControl ships with its own default configuration, described in the following sub-sections.

HAProxy

For a TimescaleDB cluster, ClusterControl can load balance either your TimescaleDB nodes or your PgBouncer nodes.

For PgBouncer, if you have more than one node, ClusterControl provides a straightforward GUI wizard. The same applies when setting up HAProxy to load balance your TimescaleDB cluster.

ClusterControl installs the HAProxy version available in the package repository of the target node's operating system. The defaults are:

  • Uses port 5433 for read-write transactions.
  • Uses port 5434 for read-only transactions.
  • Uses backend name haproxy_<ip_address>_3307_rw for read-write and haproxy_<ip_address>_3308_ro for read-only, where <ip_address> is the IP of the target node.
  • Default load balancing policy is leastconn.
  • Installs /usr/local/sbin/postgreschk_rw_split. If the file already exists, it is overwritten by default.
  • Sets up the administrative auth page.
  • Uses admin/admin as the user and password for the administrative auth page; this can be changed in the GUI.
  • Uses stats/admin as the user and password for the stats page. This can only be changed manually in the haproxy.cfg file.
  • Stats socket points to /var/run/haproxy.socket.
  • Default admin port uses 9600.
  • Timeout server is set to 10800 seconds.
  • Timeout client is set to 10800 seconds.
  • Global max connections frontend is set to 8192.
  • Max connections backend (per instance) is set to 64. If you expect a huge number of client connections, increase this to 1024 or higher.
  • All database nodes default to check when configured in the listen directive.
  • Sets up the stats and admin page with insecure-password.
  • Sets up and installs a systemd service named cc-haproxy-postgreschk with a socket listening on port 9201.

Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.
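
The health check backing these listeners can also be verified on a deployed node. A rough sketch, assuming the cc-haproxy-postgreschk unit and port 9201 mentioned above; the exact response format may vary:

    # Confirm the systemd-managed health-check socket is active (run on the HAProxy node):
    systemctl is-active cc-haproxy-postgreschk.socket
    # The check answers over TCP on port 9201 with an HTTP-style status line:
    curl -s http://localhost:9201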

Keepalived

ClusterControl installs the Keepalived version available in the package repository of the target node's operating system. In a TimescaleDB cluster, ClusterControl supports Keepalived only on top of HAProxy to manage automatic failover, high availability, and redundancy. Other defaults are listed below:

  • The first node during setup gets the highest priority. Priorities start at 100 and are decremented by 1 for each subsequent node.
  • Tunes the OS by enabling net.ipv4.ip_nonlocal_bind=1.
  • Disable firewall and Disable SELinux/AppArmor are enabled by default.

Note

If you would like to customize the above configurations, modify the template base file to suit your needs before proceeding to the deployment. See Configuration Template for details.

PgBouncer

ClusterControl supports deploying PgBouncer only on a registered TimescaleDB cluster node. Say you have three nodes registered in your TimescaleDB cluster in ClusterControl; you can deploy PgBouncer only on one of these registered nodes. Once deployed, ClusterControl handles the PgBouncer setup, such as creating the user and configuring authentication. Below are some defaults applied during deployment:

  • Log rotation is triggered at 10 MiB. When the threshold is reached, the current log is renamed and compressed, and a new file is started.
  • Parameter pool_mode is set to session. It can be overridden by a pool's own configuration.
  • Parameter server_reset_query is set to DISCARD ALL.
  • The default admin user is pgbadmin and its password is read from the cluster's configuration file /etc/cmon.d/cmon_<cluster_id>.cnf.

Deployment steps

HAProxy

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose HAProxy

  5. First step is Where to install.

    1. Server address. Fill in the IP address or hostname of the target node for this HAProxy deployment, then press Enter or click the + button. To replace the address with another one, fill in the field again and press Enter or click the + button.

    2. Policy is set to leastconn. The drop-down also offers roundrobin and source.

    3. Listen port (read/write) is set to 5433. Replace it if you want a unique port number assigned.

    4. Listen port (read-only) is set to 5434. Replace it if you want a unique port number assigned.

    5. Install for read/write splitting (master-slave replication) is set to On by default. Turn it Off if you don't need it.

    6. Under Security configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you want the firewall disabled; uncheck it if you maintain your own firewall rules.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you want SELinux/AppArmor disabled. If you uncheck it, make sure SELinux/AppArmor is configured to allow creating files and opening sockets so the database can run properly.

  6. Click Continue.

  7. Second step is Installation settings.

    1. Overwrite existing /usr/local/sbin/postgreschk_rw_split on targets. This overwrites the existing file, if any, at /usr/local/sbin/postgreschk_rw_split. Back up the file if it exists so that any changes you made to the original are not lost.

  8. Still in the second step are the Advanced settings.

    1. Stats socket default is set to /var/run/haproxy.socket. Replace the value if you need to point to another destination or filename.

    2. Admin port default is set to 9600. Replace it if you want a different port number assigned for your administrative auth page when accessed via HTTP.

    3. Admin user default is set to admin. Replace the value if you need a different username for this field.

    4. Admin password default is set to admin. Set your desired password for the Admin user.

    5. Backend name (RW) default is set to haproxy_3307_rw. Change it to your desired backend name. This will be reflected in your haproxy.cfg.

    6. Backend name (RO) default is set to haproxy_3308_ro. Change it to your desired backend name. This will be reflected in your haproxy.cfg.

    7. Timeout server (in seconds) default is set to 10800. This controls the timeout for connections to the backend servers, that is, your database nodes.

    8. Timeout client (in seconds) default is set to 10800. This controls the timeout for client connections, that is, your applications or users connecting to the HAProxy server.

    9. Max connections frontend default is set to 8192. Replace the value if you need a lower or higher limit.

    10. Max connections backend (per instance) default is set to 64. Replace the value if you need a lower or higher limit. If you expect a huge number of client connections, increase this to 1024 or higher.

    11. xinetd allow connections from default is set to 0.0.0.0/0 to allow connections from anywhere when xinetd is used. You can leave this at its default since ClusterControl uses systemd to manage the TCP health check, which runs the /usr/local/sbin/postgreschk_rw_split script.

  9. Click Continue.

  10. Third step is Server instances. Depending on how many servers you have in the cluster, parameters are set individually per server.

    1. Server instances in the load balancer lists all identified database nodes in this TimescaleDB cluster. Each node is set to Off.

      1. When a server is set to On, the Advanced options appear, allowing you to set the Role field to Active or Backup.

      2. When a server is set to On, you can also specify the Connection address to use if the target node has multiple network addresses.

  11. Click Continue to proceed to the last step, Preview.

  12. Fourth and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node. Click Finish to deploy the job.

  • Add a HAProxy load balancer:

    s9s cluster \
        --cluster-id=108 \
        --add-node \
        --nodes="haproxy://192.168.40.42?lb_user=myuser&lb_password=password&rw_splitting=true" \
        --wait \
        --log
    
    • where --cluster-id=108 targets the cluster with cluster ID 108 (the TimescaleDB cluster in this example deployment).
    • where --nodes="haproxy://192.168.40.42?lb_user=myuser&lb_password=password&rw_splitting=true" defines the node for the HAProxy deployment.
    • where haproxy://192.168.40.42 is the target node where HAProxy will be installed.
    • where lb_user=myuser&lb_password=password sets the username and password for the administrative auth page.
    • where rw_splitting=true tells ClusterControl to enable read/write splitting, with the read/write listener on port 5433.
    • where --wait and --log, as the names suggest, print the job logs while waiting for the job to finish.

Once a HAProxy node is deployed and added to the cluster, ClusterControl will deploy the HAProxy Exporter for Prometheus. You can monitor HAProxy by going to ClusterControl GUI → Dashboards and choosing HAProxy Overview in the dropdown. To manage the HAProxy service, go to ClusterControl GUI → Nodes → choose the HAProxy node → HAProxy, which opens the HAProxy stats page where you can enable or disable a backend server.
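
Beyond the dashboards, a quick end-to-end test through the read/write listener also confirms the TimescaleDB extension is reachable. A sketch, assuming the node from the example above and an existing user and database (myuser and mydb are hypothetical):

    # Query through HAProxy and confirm the TimescaleDB extension version:
    psql -h 192.168.40.42 -p 5433 -U myuser -d mydb \
        -c "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"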

Keepalived

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose Keepalived

  5. First step is Configuration.

    1. Load balancer type. Choose your target load balancer type and make sure more than one load balancer node exists; otherwise, the job will fail.

    2. Under Security configuration, these are all checkboxes:

      1. Disable firewall is checked by default. Leave it checked if you want the firewall disabled; uncheck it if you maintain your own firewall rules.

      2. Disable SELinux/AppArmor is checked by default. Leave it checked if you want SELinux/AppArmor disabled. If you uncheck it, make sure SELinux/AppArmor is configured to allow creating files and opening sockets so the database can run properly.

  6. Click Continue.

  7. Second step is Host Configuration.

    1. Virtual IP. Fill in the IP address of the virtual IP that Keepalived will manage via VRRP. Make sure to press Enter to add the entry.

    2. Network interface is the network device to which the VIP will be assigned; use the interface that carries the IP or hostname used for communication between your database nodes and ClusterControl.

  8. Click Continue to proceed to the last step, Preview.

  9. Third and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  10. Click Finish button to deploy the job.

  • Add two Keepalived nodes (it is mandatory to have more than one load balancer node):

    s9s cluster \
        --cluster-id=108 \
        --add-node \
        --nodes="keepalived://192.168.40.43;keepalived://192.168.40.42" \
        --virtual-ip="192.168.40.231" \
        --eth=eth1 \
        --wait \
        --log
    
    • where --cluster-id=108 targets the cluster with cluster ID 108 (the TimescaleDB cluster in this example deployment).
    • where --nodes="keepalived://192.168.40.43;keepalived://192.168.40.42" defines the nodes for the Keepalived deployment.
    • where keepalived://192.168.40.43;keepalived://192.168.40.42 are the target nodes where Keepalived will be installed; make sure to specify more than one node.
    • where --virtual-ip="192.168.40.231" specifies the virtual IP to be set, 192.168.40.231 in this example.
    • where --eth=eth1 sets the network interface (eth1 in this example) to which the virtual IP address will be assigned.
    • where --wait and --log, as the names suggest, print the job logs while waiting for the job to finish.
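
A rough failover drill can confirm the VRRP setup: stop HAProxy on the node holding the VIP and watch the address move to the backup. This is a sketch only, using the hosts, interface, and VIP from the example above; run it in a test window, not against live traffic:

    # Stop HAProxy on the current VIP holder...
    ssh 192.168.40.43 "sudo systemctl stop haproxy"
    # ...then confirm the VIP has moved to the other node:
    ssh 192.168.40.42 "ip addr show eth1 | grep 192.168.40.231"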

PgBouncer

  1. Go to the Clusters dashboard. Choose your target source cluster and click the ellipsis button (...)

  2. Choose Add new → Load Balancer

  3. Choose Create balancer

  4. Choose PgBouncer

  5. First step is Authentication.

    1. PgBouncer admin user is your admin user. Fill in your desired admin user here.

    2. PgBouncer admin password is your admin password. Fill in your desired admin password here.

  6. Second step is Add nodes.

    1. PgBouncer node is the IP address/hostname of your target node. Choose from your existing nodes in your TimescaleDB cluster.

    2. Listen port defaults to 6432. Replace it if you want a unique port number assigned.

  7. Click Continue.

  8. Click Continue to proceed to the last step, Preview.

  9. Third and last step is Preview, which shows all the values and chosen options that ClusterControl will use to set up the node.

  10. Click Finish button to deploy the job.

  • Add a PgBouncer node:

    s9s cluster \
        --cluster-id=108 \
        --add-node \
        --nodes="PgBouncer://192.168.40.42?service_name=pgbouncer_stage_env" \
        --wait \
        --log
    
    • where --cluster-id=108 targets the cluster with cluster ID 108 (the TimescaleDB cluster in this example deployment).
    • where --add-node sets the type of action to perform.
    • where --nodes="PgBouncer://192.168.40.42?service_name=pgbouncer_stage_env" defines the node for the PgBouncer deployment.
    • where PgBouncer://192.168.40.42 is the target node where PgBouncer will be installed; only one node can be installed at a time.
    • where service_name=pgbouncer_stage_env specifies the service name to use in systemd. It is also the basis for the configuration file path; in this example, pgbouncer.ini is created at /etc/pgbouncer_stage_env/pgbouncer.ini.
    • where --wait and --log, as the names suggest, print the job logs while waiting for the job to finish.

Once a PgBouncer node is deployed and added to the cluster, ClusterControl will deploy the PgBouncer Exporter for Prometheus. You can view it by going to the Dashboards tab → Load Balancer and choosing PgBouncer Overview in your TimescaleDB cluster dashboard.
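
For a quick look at pooler activity outside the dashboards, the admin console exposes per-database counters. A minimal sketch, assuming the node and default port from the example above:

    # Show per-database traffic statistics as the admin user:
    psql -h 192.168.40.42 -p 6432 -U pgbadmin -d pgbouncer -c "SHOW STATS;"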