MariaDB Galera Cluster
This guide will lead you through the process of setting up your first MariaDB Galera Cluster, complete with HAProxy load balancers and Keepalived for high availability, using ClusterControl. By following this tutorial, you'll have a fully functional database cluster that you can effortlessly monitor, manage, and scale.
MariaDB Galera Cluster is an open-source solution designed for high availability in MariaDB environments. It leverages Galera replication technology to offer synchronous multi-master replication, ensuring data consistency across all nodes and preventing data loss. The cluster supports automatic failover, seamless scaling, and effective load balancing, providing resilience against node failures. With features such as automatic node provisioning, parallel replication, and built-in backup options, it’s an excellent choice for critical MariaDB deployments that demand maximum uptime.
Prerequisites
Before proceeding, ensure the following requirements are met:
- ClusterControl should be installed and running. If not, follow the installation instructions in Quickstart or use the Installer Script.
- You need at least five hosts (bare-metal or virtual machines) installed with a supported operating system. See Operating System. Two hosts are for the database load balancers and the other three are for the MariaDB Galera Cluster.
- SSH access to all servers involved.
- All nodes must have internet access during the deployment stage.
- NTP is configured and running on all hosts to ensure that all nodes in the cluster maintain synchronized time.
Architecture
Our final system architecture will look like this:
```mermaid
flowchart TB
    a{{**Users/<br>clients/<br>apps**}} -->|RW| x[[VIP:192.168.99.100]] -.-> b[**lb1**<br>#40;HAProxy + Keepalived#41;<br>192.168.99.101] & c[**lb2**<br>#40;HAProxy + Keepalived#41;<br>192.168.99.102] --> d[(**db1**<br>Primary<br>192.168.99.111)] & e[(**db2**<br>Primary<br>192.168.99.112)] & f[(**db3**<br>Primary<br>192.168.99.113)]
    b & c & d & e & f -.-|manages| y[/**ClusterControl server**<br>192.168.99.5/]
    subgraph Galera Replication
        d
        e
        f
    end
```
Step 1: Set up SSH key-based authentication
- On the ClusterControl server, generate a new SSH key as the root user:
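The key-generation command is not shown above; a typical invocation (the key type and bit size are reasonable defaults, and the path matches the SSH user key path used later in the deployment wizard) is:

```shell
# Generate a passwordless RSA key pair for the root user.
# -N "" sets an empty passphrase so ClusterControl can use the key non-interactively.
ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
```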
- Copy the SSH public key to all target nodes (specify the password if prompted):
```shell
ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # lb1
ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # lb2
ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # db1
ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # db2
ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # db3
```
If the target node only allows key-based authentication, you can use the `-o` flag to customize the SSH options:

```shell
ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # lb1
ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # lb2
ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # db1
ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # db2
ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # db3
```
- Verify that you can execute the following commands from the ClusterControl server without any password prompt:
```shell
ssh [email protected] "ls /root"
ssh [email protected] "ls /root"
ssh [email protected] "ls /root"
ssh [email protected] "ls /root"
ssh [email protected] "ls /root"
```
Step 2: Deploy a new cluster
- Open your browser and go to the IP address or domain name where your ClusterControl is installed.
- From the ClusterControl dashboard, click the Deploy a cluster button in the top-right corner and choose Create a database cluster. This opens the Deploy cluster wizard.
- Choose "MySQL Galera" from the Database dropdown, "MariaDB" from the Vendor dropdown, and the database major version from the Version dropdown. Click Continue.
- In the Deploy MySQL Galera Cluster wizard, configure the database cluster as follows:
- Name: Enter a name for your cluster (e.g., My First Galera Cluster).
- Tags: Enter one or more tags, pressing "Enter" after each (e.g., production, galera, dc1).
- SSH user: root
- SSH user key path: /root/.ssh/id_rsa
- SSH port: 22
- SSH sudo password: (leave it blank)
- Install software: On (default)
- Disable firewall: Checked (default)
- Disable SELinux/AppArmor: Checked (default)
- Server port: 3306 (default)
- Server data directory: /var/lib/mysql (default)
- Admin/Root user: root
- Admin/Root password: Enter the password for the database admin/root
- Repository: Use vendor repositories (default)
- Version: Choose a database minor version.
- Configuration template: my.cnf.mdb106+-galera (default)
- Enable SSL encryption: On (default)
- Galera node:
- Specify "192.168.99.111" and press Enter, followed by "192.168.99.112" and press Enter, followed by "192.168.99.113" and press Enter.
- Wait until everything is green. If a red notice appears, inspect the error reported by ClusterControl and fix it.
- You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
- Review the summary of your deployment. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
- To deploy the cluster, click Finish.
- ClusterControl will now start provisioning the MariaDB Galera Cluster. You can monitor the progress from the Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the cluster will appear on the Home page.
Step 3: Monitor your cluster
After the cluster has been deployed, you will be redirected to the Home page. This page provides an at-a-glance view of your cluster's status and important aspects of a database, including:
- Cluster health: The Home page shows the current state of the cluster.
- Node health: Roll over the honeycomb diagram to get a more detailed view of a particular node. You can also see more detailed histograms under ClusterControl GUI → Clusters → choose the cluster → Dashboards.
- Recent alarms: View the most recent alarms for your cluster.
- Automatic recovery status: Green means ClusterControl will raise an alarm and run recovery procedures if it detects that the database service is down. Red means ClusterControl will raise an alarm but skip the recovery procedures.
- Topology viewer: Roll over each cluster entry to see a summary of the cluster visualized as a topology, giving a better understanding of your overall database state.
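Independently of the GUI, you can also verify Galera health from any database node with the mysql client. The status variables below are standard Galera/wsrep counters; the credentials are placeholders:

```shell
# Run on any db node; a healthy three-node cluster reports
# wsrep_cluster_size = 3, wsrep_local_state_comment = Synced,
# and wsrep_cluster_status = Primary.
mysql -u root -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN
  ('wsrep_cluster_size','wsrep_local_state_comment','wsrep_cluster_status');"
```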
Step 4: Import data
When importing data into a MariaDB Galera Cluster, consider the impact of synchronous replication. For large datasets, it’s recommended to import data on a single node and then resynchronize the cluster.
Here are the steps to safely import data into a MariaDB Galera cluster using ClusterControl:
- Stop two of the three nodes. First, stop db2: go to ClusterControl GUI → Nodes → locate db2 → Actions → Stop node → Stop. In the nodes list, db2 is the node with Hostname: 192.168.99.112, Port: 3306, Status: Operational, Type: MariaDB, and Role: Primary.
- Next, stop db3: go to ClusterControl GUI → Nodes → locate db3 → Actions → Stop node → Stop. In the nodes list, db3 is the node with Hostname: 192.168.99.113, Port: 3306, Status: Operational, Type: MariaDB, and Role: Primary.
- At this point, the database is running as a single-node cluster with only db1 online. Depending on the format of your data, you can use different MySQL utilities to import it. If you are importing from a dump file, transfer the dump file to db1 and restore it with the mysql client.
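A typical restore invocation with the mysql client looks like this; the file paths and credentials are placeholders for your own values:

```shell
# On db1: restore a plain SQL dump into the single-node cluster.
mysql -u root -p < /path/to/dumpfile.sql

# If the dump is gzip-compressed, stream it through gunzip instead:
gunzip -c /path/to/dumpfile.sql.gz | mysql -u root -p
```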
- Once imported, start db2 using Resync node. This triggers a full resync job using State Snapshot Transfer (SST). Go to ClusterControl GUI → Nodes → locate db2 → Actions → Resync node → Resync. In the nodes list, db2 is now the node with Hostname: 192.168.99.112, Port: 3306, Status: Shut Down, Type: MariaDB, and Role: Primary.
- Start db3 using Resync node. This triggers a full resync job using State Snapshot Transfer (SST). Go to ClusterControl GUI → Nodes → locate db3 → Actions → Resync node → Resync. In the nodes list, db3 is now the node with Hostname: 192.168.99.113, Port: 3306, Status: Shut Down, Type: MariaDB, and Role: Primary.
- Once both nodes are resynced, verify from the Home page that all nodes are green. The import exercise is complete.
Step 5: Deploy a new load balancer
Integrating HAProxy and Keepalived with MariaDB Galera Cluster enhances performance, load balancing, and high availability.
- Go to ClusterControl GUI → Clusters → locate the cluster → Actions → Add new → Load balancer → Create balancer → HAProxy → Continue.
- In the Create a load balancer wizard, configure the HAProxy as follows:
- Server address: Specify "192.168.99.101" and press Enter. Wait until everything is green. If a red notice appears, inspect the error reported by ClusterControl and fix it.
- Policy: leastconn (default). Choose the policy that suits your requirements.
- Listen port (read/write): 3307 (default)
- Install for read/write splitting (master-slave replication): Off (default)
- Disable firewall: Checked (default)
- Disable SELinux/AppArmor: Checked (default)
- Import configuration: Unchecked (default)
- Overwrite existing /usr/local/sbin/mysqlchk on target: On (default)
- Stats socket: /var/run/haproxy.socket (default)
- Admin port: 9600 (default)
- Admin user: admin
- Admin password: Enter the password for the Admin user
- Backend name (RW): haproxy_3307_rw (default)
- Backend name (RO): haproxy_3308_ro (default)
- Timeout server (in seconds): 10800 (default)
- Timeout client (in seconds): 10800 (default)
- Max connections frontend: 8192 (default)
- Max connections backend (per instance): 64 (default)
- xinetd allow connections from: 0.0.0.0/0 (default)
- 192.168.99.111:3306 (Primary): On (default)
- Advanced options: Role: Active (default); Connection address: External Address [192.168.99.111]
- 192.168.99.112:3306 (Primary): On (default)
- Advanced options: Role: Active (default); Connection address: External Address [192.168.99.112]
- 192.168.99.113:3306 (Primary): On (default)
- Advanced options: Role: Active (default); Connection address: External Address [192.168.99.113]
- Review the summary of your deployment. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
- Repeat the same steps to add the second HAProxy instance, 192.168.99.102 (lb2).
- To deploy the load balancer, click Finish.
- ClusterControl will now start provisioning the HAProxy instance. You can monitor the progress from the Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, proceed to the next step to add Keepalived.
- Go to ClusterControl GUI → Clusters → locate the cluster → Actions → Add new → Load balancer → Create balancer → Keepalived → Continue.
- In the Create a load balancer wizard, configure Keepalived as follows:
- Load balancer type: Select HAProxy
- Keepalived: Select 192.168.99.101:9600 and 192.168.99.102:9600. Ensure everything is green. Two HAProxy instances are required to configure Keepalived.
- Security configuration:
- Disable firewall: Checked (default)
- Disable SELinux/AppArmor: Checked (default)
- Virtual IP: Enter the virtual IP; in this case, 192.168.99.100.
- Network interface: Specify the network interface for the virtual IP; in this case, eth0.
- Review the summary of your deployment. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
- ClusterControl will now start provisioning the Keepalived instance. You can monitor the progress from the Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the load balancer instance will appear on the Nodes page.
- To check the HAProxy status, on the Nodes page click HAProxy under the Type column to open the Node details page for the selected instance. Verify that everything is green.
- The load balancer exercise is complete.
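As a quick check outside the GUI, the HAProxy admin port configured above also serves HAProxy's statistics report over HTTP. A sketch using curl (the credentials are the ones you set in the wizard, and the `;csv` suffix is HAProxy's standard CSV view of the stats page):

```shell
# Fetch the HAProxy statistics page from lb1 in CSV form.
curl -s -u admin:yourpassword "http://192.168.99.101:9600/;csv" | head
```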
Tips
For production environments, it is recommended to have at least two load balancers to eliminate a single point of failure. See Load Balancers.
Step 6: Connect to the database cluster
At this point, your application should be able to access the database cluster via the following endpoints:
- Virtual IP (VIP) managed by Keepalived, 192.168.99.100 on port 3307. Use the virtual IP address in your connection string instead of the load balancers' primary IP address. This VIP handles all read and write traffic to the database nodes, ensuring seamless failover and high availability.
- Direct connection to a database node on port 3306.
With Keepalived managing read and write traffic through the VIP, applications can seamlessly interact with the database without needing to handle individual HAProxy instances. This setup significantly enhances redundancy and ensures that the database remains accessible at all times.
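To make the endpoint choice concrete, here is a minimal sketch (the user, password, and database names are illustrative) of building a connection string that targets the VIP rather than any single load balancer or node:

```python
# Cluster endpoints from this guide's architecture.
VIP = "192.168.99.100"   # Keepalived-managed virtual IP
HAPROXY_PORT = 3307      # HAProxy read/write listen port
DIRECT_PORT = 3306       # direct MariaDB port on each db node

def dsn(user: str, password: str, database: str,
        host: str = VIP, port: int = HAPROXY_PORT) -> str:
    """Build a MySQL-style connection URI; the defaults point at the VIP,
    so failover between lb1 and lb2 is transparent to the application."""
    return f"mysql://{user}:{password}@{host}:{port}/{database}"

print(dsn("appuser", "secret", "appdb"))
# mysql://appuser:secret@192.168.99.100:3307/appdb
```

Passing an explicit host and DIRECT_PORT instead reproduces the second endpoint, a direct connection to a single database node.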
Step 7: Enable automatic backups
To protect your data, set up automatic backups. ClusterControl supports both logical and physical backups for MariaDB Galera Cluster. For a small dataset, a daily logical backup should be sufficient since by default, ClusterControl configures every database server with binary logs enabled that can be used for point-in-time recovery.
- Go to ClusterControl GUI → choose the cluster → Backups to open the backup overview page of the cluster.
- Go to Create Backup → Schedule a Backup to schedule an automated backup. This opens a backup wizard.
- In the Create a backup schedule wizard, configure the backup as follows:
- Schedule name: Daily backup - mysqldump
- Cluster: (leave as it is)
- Backup host: db3
- Backup method: mysqldump
- Dump type: Complete PITR-compatible
- Upload backup to cloud: Off
- Compression: On (default)
- Compression level: 6
- Enable encryption: On
- Verify backup: Off (default)
- Retention: On (default)
- Use extended insert: On (default)
- Failover backup if node is down: On
- Storage location: Store on controller (default)
- Storage directory: /root/backups (default)
- Backup subdirectory: BACKUP-%I (default)
- Set backup schedule: Simple
- Every: day at 2:00
- Select timezone: Use your preferred timezone
- Review the summary of the backup configuration. You can always go back to any previous section to modify your configurations if you wish. The backup configuration settings will be kept until you exit the wizard.
- Click Create to schedule the backup.
Once configured, ClusterControl will take care of the automated backups, ensuring your data is safe. ClusterControl also supports advanced backup features like partial backup, backup encryption, backup verification, backup throttling, backup to cloud, and point-in-time recovery.
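For reference, a PITR-compatible logical dump of the kind scheduled above can be approximated by hand with mysqldump. The exact flag set ClusterControl generates may differ, so treat this as an illustration:

```shell
# A consistent, binlog-position-annotated dump suitable for point-in-time recovery.
# --single-transaction: consistent snapshot without locking InnoDB tables
# --master-data=2: record the binary log file and position as a comment in the dump
# --flush-logs: rotate to a fresh binary log at dump time
mysqldump -u root -p --single-transaction --master-data=2 --flush-logs \
  --all-databases | gzip > /root/backups/full-dump.sql.gz
```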
Step 8: Configure alerts
To keep track of any issues or incidents in your cluster, it's important to set up alerting. ClusterControl supports sending alarms and alerts to email, webhooks, and third-party notification services like Slack or Telegram. In this example, we are going to use email.
First, configure the mail server. Go to ClusterControl GUI → Settings → You don't have a mail server configured. Configure now → SMTP Server. Fill in all the necessary information about the SMTP server. You can also opt for Sendmail; however, a mail transfer agent (sendmail, postfix, or exim) must be installed on the ClusterControl server.
Once configured, we can configure the alert and recipients as below:
- Go to ClusterControl GUI → choose the cluster → Settings → Email Notifications.
- Choose the user group you belong to from the User group dropdown.
- Choose your email address from the Users in the selected group dropdown.
- Click Enable.
- Set the Digest delivery time, the time of day at which a digest (summarized events) is sent to you.
- Set all Critical events to "Deliver" (default), all Warning events to "Digest" and you may ignore the Info events.
This ensures you are always notified when critical issues arise and can act quickly to resolve them. You will also get an email with summarized warning events every day at the configured delivery time.
Tip
You can also configure alarms to be sent to third-party notification systems (Slack, Telegram), incident management systems (PagerDuty, ServiceNow, OpsGenie), or webhooks. See Integration → Notification Services.
Step 9: Manage your cluster and load balancer
ClusterControl provides many other features for managing your cluster, including:
- Database upgrades and patching: Perform rolling minor version upgrades to keep your cluster up to date without downtime. This feature is available at ClusterControl GUI → choose the cluster → Actions → Upgrades.
- Database node management: Perform start/stop/restart node, reboot host, bootstrap cluster, activate/deactivate read-only, resync node, change primary source and rebuild replica. These features are available at ClusterControl GUI → choose the cluster → Nodes → Actions.
- HAProxy management: Perform HAProxy management tasks such as disabling nodes, viewing detailed status, restarting the service, and rebooting the host. These features are available at ClusterControl GUI → choose the cluster → Nodes → Actions and ClusterControl GUI → choose the cluster → Nodes → HAProxy.
- Configuration management: Perform database configuration changes globally or on an individual database node. This feature is available at ClusterControl GUI → choose the cluster → Manage → Configuration.
- Database user management: Manage database users and privileges globally. This feature is available at ClusterControl GUI → choose the cluster → Manage → DB Users.
- Backup management: Create, schedule, restore, perform point-in-time recovery, upload backup to cloud, verify an existing backup, and set retention period. These features are available at ClusterControl GUI → choose the cluster → Backups → Actions and ClusterControl GUI → choose the cluster → Backups → More.
- Maintenance management: Activate, deactivate, remark and schedule maintenance mode for all nodes. This feature is available at ClusterControl GUI → choose the cluster → Nodes → Actions → Schedule maintenance.
- SSH console: Access your nodes directly from ClusterControl GUI via web SSH console. This feature is available at ClusterControl GUI → choose the cluster → Nodes → Actions → SSH Console.
Step 10: Scale your cluster
One of the key benefits of using ClusterControl is the ease of scaling, and scaling your MariaDB Galera Cluster is straightforward. You can scale out the cluster with another database node or a read-only replica (delayed or real-time, asynchronous or semi-synchronous replication). Common use cases are dedicating the replica node to read-only workloads, backups, reporting, or analytics.
Adding a new database node or replica node is an online, non-blocking operation. It does not cause downtime to the running cluster; however, expect increased load on the source node if you stream directly from the primary. You may opt to stream from an existing backup to minimize this risk.
ClusterControl assumes that the new node you want to add meets the requirements described under Prerequisites and is configured with proper SSH key-based authentication as shown under Step 1: Set up SSH key-based authentication.
- To add a new replica, go to ClusterControl GUI → choose the cluster → Actions → Add new → Replica node → Create a replica node.
- In the Create a replica node wizard, configure the following:
- Port: 3306
- Data directory: /var/lib/mysql
- Configuration template: my.cnf.repl80
- Install software: On (default)
- Disable firewall: Checked (default)
- Disable SELinux/AppArmor: Checked (default)
- Rebuild from a backup: Off (default)
- Include in load balancer set (if exists): On
- Delay the replica node: Off (default)
- Primary node: Choose one of the primary nodes from the dropdown.
- Node: Specify the IP address or hostname of the node that you want to add and press Enter.
- Review the summary of the deployment. You can always go back to any previous section to modify your configurations if you wish. The deployment configuration settings will be kept until you exit the wizard.
- Click Finish to trigger the deployment job.
- ClusterControl will start provisioning the new replica node. You can monitor the progress from the Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the node will appear on the Nodes page.
Conclusion
Congratulations! You've successfully deployed, monitored, managed, and scaled your first MariaDB Galera Cluster using ClusterControl. Explore additional features like failover management, advanced monitoring, backup management, and more to optimize your database infrastructure for high availability and performance.