Percona XtraDB Cluster
In this guide, we will walk you through setting up your first database cluster running on Percona XtraDB Cluster with a ProxySQL load balancer using ClusterControl. By the end of this tutorial, you will have a fully operational database cluster that you can monitor, manage, and scale using ClusterControl.
Percona XtraDB Cluster (PXC) is an open-source, high-availability solution for MySQL, based on the Galera replication technology. It provides a synchronous multi-master replication system where all nodes in the cluster hold the same data, ensuring strong consistency and zero data loss. PXC is designed to deliver automatic failover, seamless scalability, and load balancing across multiple nodes, making it highly resilient to node failures. It includes features like automatic node provisioning, parallel replication, and integrated backup support, making it a robust solution for mission-critical MySQL environments that require high uptime and fault tolerance.
Prerequisites
Before you start, ensure that you have the following:
- ClusterControl should be installed and running. If not, follow the installation instructions in Quickstart or use the Installer Script.
- You need at least four hosts (bare-metal or virtual machines) installed with a supported operating system. See Operating System. One host is for the database load balancer and the other three hosts are for Percona XtraDB Cluster.
- SSH access to all servers involved.
- All nodes must have internet access during the deployment stage.
- NTP is configured and running on all hosts to ensure that all nodes in the cluster maintain synchronized time.
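A three-node cluster is the practical minimum because Galera only keeps accepting writes while a strict majority (quorum) of nodes can see each other. As a quick sketch of the quorum arithmetic (a simplification — Galera also supports weighted quorum):

```shell
# Galera stays writable only while a strict majority of nodes are reachable.
# quorum_kept prints "yes" if the surviving nodes form a majority.
quorum_kept() {
  total=$1
  alive=$2
  if [ $((2 * alive)) -gt "$total" ]; then echo yes; else echo no; fi
}

quorum_kept 3 2   # three-node cluster, one node down -> yes
quorum_kept 2 1   # two-node cluster, one node down -> no (cluster halts)
```

With three nodes the cluster survives one node failure; with only two, a single failure loses quorum and halts writes, which is why this guide provisions three database hosts.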
Architecture
Our final system architecture will look like this:
```mermaid
flowchart TB
    a{{**Users/<br>clients/<br>apps**}} -->|RW| b[**lb1** <br>#40;ProxySQL#41;<br>192.168.99.100] --> d[(**db1**<br>Primary<br>192.168.99.101)] & e[(**db2**<br>Primary<br>192.168.99.102)] & f[(**db3**<br>Primary<br>192.168.99.103)]
    d & e & f -.-|manages| y[/**ClusterControl server**<br>192.168.99.5/]
    b -.- |manages|y
    subgraph Galera Replication
    d
    e
    f
    end
```
Step 1: Set up SSH key-based authentication
1. On the ClusterControl server, generate a new SSH key as the root user.

2. Copy the SSH public key to all target nodes (enter the password if prompted):

    ```bash
    ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # lb1
    ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # db1
    ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # db2
    ssh-copy-id -i /root/.ssh/id_rsa [email protected]   # db3
    ```

    If the target node only permits key-based authentication, use the `-o` flag to pass custom SSH options, such as an existing private key:

    ```bash
    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # lb1
    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # db1
    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # db2
    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' [email protected]   # db3
    ```

3. Perform some tests and make sure you can execute the following commands from the ClusterControl server without any password prompt:

    ```bash
    ssh [email protected] "ls /root"   # lb1
    ssh [email protected] "ls /root"   # db1
    ssh [email protected] "ls /root"   # db2
    ssh [email protected] "ls /root"   # db3
    ```
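Optionally, the same verification can be scripted in one loop. `BatchMode=yes` makes ssh fail instead of falling back to a password prompt, so a missing or rejected key shows up immediately:

```shell
# Verify passwordless SSH from the ClusterControl server to every node.
for h in 192.168.99.100 192.168.99.101 192.168.99.102 192.168.99.103; do
  ssh -o BatchMode=yes root@"$h" hostname || echo "key-based login to $h failed"
done
```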
Step 2: Deploy a new cluster
1. Open your browser and go to the IP address or domain name where your ClusterControl is installed.

2. From the ClusterControl dashboard, click the Deploy a cluster button in the top-right corner and choose Create a database cluster. This opens the Deploy cluster wizard.

3. Choose "MySQL Galera" from the Database dropdown, "Percona XtraDB" from the Vendor dropdown, and the database major version from the Version dropdown. Click Continue.

4. In the Deploy MySQL Galera Cluster wizard, configure the database cluster as below:
    - Name: Enter a name for your cluster (e.g., My First Galera Cluster).
    - Tags: Enter one or more tags, separating them by pressing "Enter" (e.g., production, galera, dc1).
    - SSH user: root
    - SSH user key path: /root/.ssh/id_rsa
    - SSH port: 22
    - SSH sudo password: (leave it blank)
    - Install software: On (default)
    - Disable firewall: Checked (default)
    - Disable SELinux/AppArmor: Checked (default)
    - Server port: 3306 (default)
    - Server data directory: /var/lib/mysql (default)
    - Admin/Root user: root
    - Admin/Root password: Enter the password for the database admin/root user.
    - Repository: Use vendor repositories (default)
    - Version: Choose a database minor version.
    - Configuration template: my.cnf.80-pxc (default)
    - Enable SSL encryption: On (default)
    - Galera node:
        - Specify "192.168.99.101" and press "Enter", then "192.168.99.102" and "192.168.99.103", pressing "Enter" after each.
        - Wait until everything is green. If a red notice appears, inspect the error seen by ClusterControl and fix it.
        - You can only proceed to the next step once all of the specified nodes are reachable (shown in green).
    - Review the summary of your deployment. You can always go back to any previous section to modify your configuration. The deployment settings are kept until you exit the deployment wizard.

5. To deploy the cluster, click Finish.

6. ClusterControl will now start provisioning the Percona XtraDB Cluster. You can monitor the progress from the Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the cluster will appear on the Home page.
Step 3: Monitor your cluster
After the cluster has been deployed, you will be redirected to the Home page. This page provides an at-a-glance view of your cluster's status and important aspects of the database, including:
- Cluster health: The Home page shows the overall cluster state.
- Node health: Roll over the honeycomb diagram for a more detailed view of a particular node. More detailed histograms are available under ClusterControl GUI → Clusters → choose the cluster → Dashboards.
- Recent alarms: View the most recent alarms for your cluster.
- Automatic recovery status: Green means ClusterControl will raise an alarm and perform recovery procedures if it detects that the database service is down. Red means ClusterControl will raise an alarm but skip the recovery procedures.
- Topology viewer: Roll over a cluster entry to see a summary of the cluster visualized in a topology view and better understand your overall database state. Also available under ClusterControl GUI → Clusters → choose the cluster → Nodes → Topology.
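You can cross-check what the Home page shows with Galera's own status variables on any database node. `wsrep_cluster_size`, `wsrep_cluster_status` and `wsrep_local_state_comment` are standard Galera status variables; the credentials are the admin user created in Step 2:

```shell
# Ask a node how the cluster looks from its point of view.
# A healthy 3-node cluster reports cluster_size=3, status=Primary, state=Synced.
mysql -h 192.168.99.101 -uroot -p -e \
  "SHOW GLOBAL STATUS WHERE Variable_name IN
   ('wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment');"
```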
Step 4: Import data
To import data into a Percona XtraDB Cluster (PXC), keep in mind that PXC uses synchronous replication: every change is replicated across all nodes. Performing a restore using mysqldump or LOAD DATA INFILE has a significant impact on the cluster and is a slow operation, since every insert must be certified by Galera write-set replication. To minimize this impact, perform the restore on a single database node while the other nodes are stopped, then bring them back via a full resync.
Here are the steps to safely import data into a PXC cluster using ClusterControl:
1. Stop two out of the three nodes. First, stop db2: go to ClusterControl GUI → Nodes → locate db2 → Actions → Stop node → Stop. Find the node with Hostname: 192.168.99.102, Port: 3306, Status: Operational, Type: XtraDB and Role: Primary in the nodes list.

2. Next, stop db3: go to ClusterControl GUI → Nodes → locate db3 → Actions → Stop node → Stop. Find the node with Hostname: 192.168.99.103, Port: 3306, Status: Operational, Type: XtraDB and Role: Primary in the nodes list.

3. At this point, the database runs as a single-node cluster with only db1 up. Depending on the format of your data, you can use different MySQL utilities to import it. If you are importing from a dump file, transfer the dump file to db1 and restore it there.
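For example, a restore from a compressed logical dump could look like the following. `dump.sql.gz` is a placeholder file name, and you will be prompted for the database root password set in Step 2:

```shell
# From the ClusterControl server: copy the dump to db1 (the only running node).
scp dump.sql.gz [email protected]:/root/

# Then on db1: restore the dump into the local instance.
gunzip -c /root/dump.sql.gz | mysql -uroot -p
```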
4. Once imported, start db2 using Resync node. This triggers a full resyncing job using State Snapshot Transfer (SST). Go to ClusterControl GUI → Nodes → locate db2 → Actions → Resync node → Resync. Find the node with Hostname: 192.168.99.102, Port: 3306, Status: Shut Down, Type: XtraDB and Role: Primary in the nodes list.

5. Start db3 using Resync node as well, again triggering a full resync via State Snapshot Transfer (SST). Go to ClusterControl GUI → Nodes → locate db3 → Actions → Resync node → Resync. Find the node with Hostname: 192.168.99.103, Port: 3306, Status: Shut Down, Type: XtraDB and Role: Primary in the nodes list.

6. Once resynced, verify that all nodes are green on the Home page. The import exercise is complete.
Step 5: Deploy a new load balancer
Using ProxySQL in front of a Percona XtraDB Cluster enhances performance, scalability, and high availability by providing intelligent query routing, load balancing, connection pooling, failover handling, and security. It allows you to fully leverage the power of the cluster by managing traffic efficiently while ensuring the application stays connected even during node failures or maintenance events.
1. Go to ClusterControl GUI → Clusters → locate the cluster → Actions → Add new → Load balancer → Create balancer → ProxySQL → Continue.

2. In the Create a load balancer wizard, configure ProxySQL as below:
    - Version: 2.x
    - Server address: Specify "192.168.99.100" and press "Enter". Wait until everything is green. If a red notice appears, inspect the error seen by ClusterControl and fix it.
    - Admin port: 6032 (default)
    - Listening port: 6033 (default)
    - Disable firewall: Checked (default)
    - Disable SELinux/AppArmor: Checked (default)
    - Import configuration: Unchecked (default)
    - Use native clustering: Unchecked (default). Check this box if you want to deploy two or more ProxySQL instances, or if you plan to add more ProxySQL instances in the future.
    - Administration user: proxysql-admin (default)
    - Administration password: Specify the password for the ProxySQL admin user. ClusterControl will create this user.
    - Monitor user: proxysql-monitor (default)
    - Monitor password: Specify the password for the ProxySQL monitor user. ClusterControl will create this user.
    - Create new user: Selected
    - Database username: Specify the database username that will be load balanced via ProxySQL. ClusterControl will create this user.
    - Database password: Specify the password for the database username above.
    - Database name: Specify the database and/or table to be granted to this database user. To grant this user access to all tables in the database example, specify example.*.
    - MySQL privilege(s): ALL PRIVILEGES
    - 192.168.99.101:3306 (Primary): On (default)
    - 192.168.99.102:3306 (Primary): On (default)
    - 192.168.99.103:3306 (Primary): On (default)
    - Leave Max replication lag, Max connection and Weight at their defaults.
    - Are you using implicit transactions?: Off (default). Toggle this on if the application relies on SET autocommit=0 to create a transaction for you.
    - Review the summary of your deployment. You can always go back to any previous section to modify your configuration. The deployment settings are kept until you exit the deployment wizard.

3. To deploy the load balancer, click Finish.

4. ClusterControl will now start provisioning the ProxySQL instance. You can monitor the progress from the Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the load balancer instance will appear on the Nodes page.
Tips
For production environments, it is recommended to have at least two load balancers to eliminate a single point of failure. You can deploy another ProxySQL instance and tie them together with a virtual IP address. See Load Balancers.
Step 6: Connect to the database cluster
At this point, your application should be able to access the database cluster via the following endpoints:
- Load balancer instance, 192.168.99.100 on port 6033. This is a load-balanced MySQL connection via ProxySQL. Use the database user credentials that you created at Step 5.
- If there are two or more load balancers, tie them together with a virtual IP address provided by Keepalived. Use the virtual IP address in your connection string instead of the load balancers' primary IP address.
- One of the database nodes on port 3306. However, you need to create a new database user using ClusterControl. A user created during the ProxySQL deployment stage is only allowed from the ProxySQL host.
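For example, using the standard mysql client against the ProxySQL endpoint; `appuser` and `example` stand in for the database user and database you created in Step 5:

```shell
# Connect through ProxySQL (port 6033), not directly to MySQL on 3306.
mysql -h 192.168.99.100 -P 6033 -u appuser -p example
```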
Step 7: Enable automatic backups
To ensure your cluster data is protected, you should schedule an automatic backup. For Percona XtraDB Cluster, ClusterControl supports both logical and physical backups. For a small dataset, a daily logical backup should be sufficient, since by default ClusterControl configures every database server with binary logs enabled, which can be used for point-in-time recovery.
1. Go to ClusterControl GUI → choose the cluster → Backups to open the cluster's backup overview page.

2. Go to Create Backup → Schedule a Backup to schedule an automated backup. This opens a backup wizard.

3. In the Create a backup schedule wizard, configure the backup as below:
    - Schedule name: Daily backup - mysqldump
    - Cluster: (leave as it is)
    - Backup host: db3
    - Backup method: mysqldump
    - Dump type: Complete PITR-compatible
    - Upload backup to cloud: Off
    - Compression: On (default)
    - Compression level: 6
    - Enable encryption: On
    - Verify backup: Off (default)
    - Retention: On (default)
    - Use extended insert: On (default)
    - Failover backup if node is down: On
    - Storage location: Store on controller (default)
    - Storage directory: /root/backups (default)
    - Backup subdirectory: BACKUP-%I (default)
    - Set backup schedule: Simple
    - Every: day at 2:00
    - Select timezone: Use your preferred timezone
    - Review the summary of the backup configuration. You can always go back to any previous section to modify your configuration. The backup settings are kept until you exit the wizard.

4. Click Create to schedule the backup.
Once configured, ClusterControl will take care of the automated backups, ensuring your data is safe. ClusterControl also supports advanced backup features like partial backup, backup encryption, backup verification, backup throttling, backup to cloud and point-in-time recovery. All successful backups are listed on the Backups → All Backups page, where you can see the backup logs and size, and perform a restoration if necessary.
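The "Complete PITR-compatible" dump type corresponds to a full logical dump that records the binary log coordinates, so binary logs can later be replayed on top of it for point-in-time recovery. As an illustrative equivalent (the exact flags ClusterControl uses may differ):

```shell
# Full dump that records binary log coordinates for point-in-time recovery.
# Illustrative only; ClusterControl builds its own mysqldump invocation.
mysqldump --all-databases --single-transaction --triggers --routines --events \
  --master-data=2 > full-backup.sql
```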
Step 8: Configure alerts
To keep track of any issues or incidents in your cluster, it is important to set up alerting. ClusterControl supports sending alarms and alerts to email, webhooks and third-party notification services like Slack or Telegram. In this example, we are going to use email.
First, configure the mail server. Go to ClusterControl GUI → Settings → You don't have a mail server configured. Configure now → SMTP Server. Fill in all the necessary information about the SMTP server. You can also opt for Sendmail, but then a mail transfer agent (sendmail, postfix or exim) must be installed on the ClusterControl server.
Once configured, set up the alerts and recipients as below:
- Go to ClusterControl GUI → choose the cluster → Settings → Email Notifications.
- Choose the user group you belong to from the User group dropdown.
- Choose your email address from the Users in the selected group dropdown.
- Click Enable.
- Set the Digest delivery time to when you want a digest (summarized events) to be sent to you every day.
- Set all Critical events to "Deliver" (default), all Warning events to "Digest", and optionally ignore the Info events.
This ensures you are always notified when critical issues arise and can act quickly to resolve them. You will also get an email with summarized warning events every day at the configured delivery time.
Tip
You can also configure alarms to be sent to third-party notification systems (Slack, Telegram), incident management systems (PagerDuty, ServiceNow, OpsGenie) or webhooks. See Integration → Notification Services.
Step 9: Manage your cluster and load balancer
ClusterControl provides many other features for managing your cluster, including:
- Database upgrades and patching: Perform rolling upgrades to keep your cluster up to date without downtime. This feature is available at ClusterControl GUI → choose the cluster → Actions → Upgrades.
- Database node management: Perform start/stop/restart node, reboot host, bootstrap cluster, activate/deactivate read-only, resync node, change primary source and rebuild replica. These features are available at ClusterControl GUI → choose the cluster → Nodes → Actions.
- ProxySQL management: Perform ProxySQL management related to query rules, query cache, proxy user management, export configuration, synchronize instances (in cluster mode), restart service and reboot host. These features are available at ClusterControl GUI → choose the cluster → Nodes → Actions and ClusterControl GUI → choose the cluster → Nodes → ProxySQL.
- Configuration management: Perform database configuration changes globally or on an individual database node. This feature is available at ClusterControl GUI → choose the cluster → Manage → Configuration.
- Database user management: Manage database users and privileges globally. This feature is available at ClusterControl GUI → choose the cluster → Manage → DB Users.
- Backup management: Create, schedule, restore, perform point-in-time recovery, upload backup to cloud, verify an existing backup, and set retention period. These features are available at ClusterControl GUI → choose the cluster → Backups → Actions and ClusterControl GUI → choose the cluster → Backups → More.
- Maintenance management: Activate, deactivate, remark and schedule maintenance mode for all nodes. This feature is available at ClusterControl GUI → choose the cluster → Nodes → Actions → Schedule maintenance.
- SSH console: Access your nodes directly from ClusterControl GUI via web SSH console. This feature is available at ClusterControl GUI → choose the cluster → Nodes → Actions → SSH Console.
Step 10: Scale your cluster
One of the key benefits of using ClusterControl is the ease of scaling your cluster. For Percona XtraDB Cluster, you can scale out the cluster with another database node or a read-only replica (delayed or real-time, asynchronous or semi-synchronous replication). Common use cases are dedicated read-only workloads, backups, reporting or analytics.
Adding a new database node or replica node is an online and non-blocking operation. It should not cause downtime to the running cluster; however, expect increased load on the source node if you stream directly from the primary. You may opt to stream from an existing backup to minimize this risk.
ClusterControl assumes that the new node you want to add meets the requirements described under Prerequisites and is configured with proper SSH key-based authentication as shown under Step 1: Set up SSH key-based authentication.
1. To add a new replica, go to ClusterControl GUI → choose the cluster → Actions → Add new → Replica node → Create a replica node.

2. In the Create a replica node wizard, configure the following:
    - Port: 3306
    - Data directory: /var/lib/mysql
    - Configuration template: my.cnf.repl80
    - Install software: On (default)
    - Disable firewall: Checked (default)
    - Disable SELinux/AppArmor: Checked (default)
    - Rebuild from a backup: Off (default)
    - Include in load balancer set (if exists): On
    - Delay the replica node: Off (default)
    - Primary node: Choose one of the primary nodes from the dropdown.
    - Node: Specify the IP address or hostname of the node that you want to add and press "Enter".
    - Review the summary of the deployment. You can always go back to any previous section to modify your configuration. The deployment settings are kept until you exit the wizard.

3. Click Finish to trigger the deployment job.

4. ClusterControl will start provisioning the new replica node. You can monitor the progress from the Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the node will appear on the Nodes page.
Conclusion
Congratulations! You’ve successfully deployed, monitored, managed and scaled your first Percona XtraDB Cluster using ClusterControl. From here, you can explore additional features such as failover management, advanced monitoring, advanced backup management, database restore, cluster cloning and many more. As you grow, ClusterControl will help ensure that your database infrastructure remains highly available and performant.