
MySQL Replication (MariaDB, Percona, Oracle)

This guide will lead you through the process of setting up your first MySQL replication topology (primary-replica), complete with ProxySQL load balancers and Keepalived for high availability, using ClusterControl. By following this tutorial, you'll have a fully functional database cluster that you can effortlessly monitor, manage, and scale.

MySQL Replication is a well-established feature that enables data from a primary (also known as a source) MySQL server to be copied automatically to one or more replicas. This allows you to scale out read workloads, offload backups and reporting, and provide basic high availability with manual or automatic failover strategies. This tutorial applies to any of the following MySQL distributions:

  • MariaDB
  • Percona
  • Percona Pro
  • Oracle

Prerequisites

Before proceeding, ensure the following requirements are met:

  • ClusterControl is installed and running. If not, follow the instructions in Quickstart or use the Installer Script.
  • At least five hosts (bare-metal or virtual machines) with a supported OS:
    • One host for the primary MySQL server.
    • Two hosts for the replica MySQL servers.
    • Two hosts for the load balancers (ProxySQL).
  • SSH access to all servers involved.
  • All nodes must have internet access during the deployment stage.
  • NTP is configured and running on all hosts to keep their clocks synchronized.

Architecture

Below is a simplified diagram of the final architecture:

flowchart TB
   a{{**Users/<br>clients/<br>apps**}} -->|RW| x[[VIP:192.168.99.100]] -.-> b[**lb1** <br>#40;ProxySQL + Keepalived#41;<br>192.168.99.101] & c[**lb2** <br>#40;ProxySQL + Keepalived#41;<br>192.168.99.102]-->|RW| d[(**db1**<br>Primary<br>192.168.99.111)] 
   b & c -->|RO| e[(**db2**<br>Replica<br>192.168.99.112)] & f[(**db3**<br>Replica<br>192.168.99.113)]
   d -.->|replicates| e & f
   b & c & d & e & f -.-|manages| y[/**ClusterControl server**<br>192.168.99.5/]
    subgraph MySQL Replication
        d
        e
        f
    end

Step 1: Set up SSH key-based authentication

  1. On the ClusterControl server, generate a new SSH key as root user:

    ssh-keygen -t rsa
    
  2. Copy the SSH public key to all target nodes (specify the password if prompted):

    ssh-copy-id -i /root/.ssh/id_rsa root@192.168.99.101  # lb1
    ssh-copy-id -i /root/.ssh/id_rsa root@192.168.99.102  # lb2
    ssh-copy-id -i /root/.ssh/id_rsa root@192.168.99.111  # db1
    ssh-copy-id -i /root/.ssh/id_rsa root@192.168.99.112  # db2
    ssh-copy-id -i /root/.ssh/id_rsa root@192.168.99.113  # db3
    

    If the target node only accepts key-based authentication, you can use the -o flag to customize the SSH options and point to an existing private key:

    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' root@192.168.99.101  # lb1
    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' root@192.168.99.102  # lb2
    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' root@192.168.99.111  # db1
    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' root@192.168.99.112  # db2
    ssh-copy-id -i /root/.ssh/id_rsa -p 22 -o 'IdentityFile /root/myprivatekey.pem' root@192.168.99.113  # db3
    
  3. Perform some tests and make sure you can execute the following commands from the ClusterControl server without any password prompt:

    ssh root@192.168.99.101 "ls /root"
    ssh root@192.168.99.102 "ls /root"
    ssh root@192.168.99.111 "ls /root"
    ssh root@192.168.99.112 "ls /root"
    ssh root@192.168.99.113 "ls /root"
    
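The key distribution above can also be scripted. The sketch below only prints the `ssh-copy-id` commands for review (pipe the output to `sh` to execute); the IP addresses are the example values from the architecture diagram:

```shell
# Print the ssh-copy-id command for every node in this guide's topology,
# so you can review the list before running it.
NODES="192.168.99.101 192.168.99.102 192.168.99.111 192.168.99.112 192.168.99.113"
for ip in $NODES; do
  echo "ssh-copy-id -i /root/.ssh/id_rsa root@${ip}"
done
```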

Step 2: Deploy a new cluster

  1. Open your browser and navigate to your ClusterControl server’s IP or domain name.

  2. From the ClusterControl dashboard, click on the Deploy a cluster button in the top-right corner and choose Create a database cluster. This will open the Deploy cluster wizard.

  3. Choose "MySQL Replication" from the Database dropdown, "Oracle" (or your preferred vendor) from the Vendor dropdown, and the database major version from the Version dropdown. Click Continue.

  4. In the Deploy MySQL Replication wizard, configure the database cluster as below:

    • Name: Enter a name for your cluster (e.g., My First Replication Cluster).
    • Tags: Enter one or more tags, separated by pressing "Enter" (e.g., production, replication, dc1).
    • SSH user: root
    • SSH user key path: /root/.ssh/id_rsa
    • SSH port: 22
    • SSH sudo password: (leave it blank)
    • Install software: On (default)
    • Disable firewall: Checked (default)
    • Disable SELinux/AppArmor: Checked (default)
    • Server port: 3306 (default)
    • Server data directory: /var/lib/mysql (default)
    • Admin/Root user: root
    • Admin/Root password: Enter the password for the database admin/root
    • Repository: Use vendor repositories (default)
    • Version: Choose a database minor version.
    • Configuration template: my.cnf.replication (default)
    • Enable SSL encryption: On (default)
    • Nodes:
      • Specify "192.168.99.111" and press Enter, followed by "192.168.99.112" and press Enter, followed by "192.168.99.113" and press Enter.
      • Wait until everything is green. If a red notice appears, inspect the error seen by ClusterControl and fix it.
      • You can only proceed to the next step if all of the specified nodes are reachable (shown in green).
    • Review the summary of your deployment. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
  5. Click Finish to start the deployment.

  6. ClusterControl will provision your MySQL Replication cluster. You can monitor the progress from Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the cluster will appear in the Home page.

Step 3: Monitor your cluster

After the cluster has been deployed, you will be redirected to the Home page. This page provides an at-a-glance view of your cluster's status and important aspects of a database, including:

  • Cluster health: The Home page shows the current cluster state.
  • Node health: Hover over the honeycomb diagram to get a more detailed view of a particular node. You can also see more detailed histograms under ClusterControl GUI → Clusters → choose the cluster → Dashboards.
  • Recent alarms: View most recent alarms of your cluster.
  • Automatic recovery status: Green means ClusterControl will raise an alarm and perform recovery procedures if it detects the database service is down. Red means ClusterControl will raise an alarm but the recovery procedures will be skipped.
  • Topology viewer: Hover over each cluster entry to see a summary of the cluster visualized in a topology view, to better understand your overall database state.

Step 4: Import data

When importing data in a primary-replica (asynchronous) replication setup, you must import the data on the primary server, and it will replicate to all replicas.

  1. Import data on the primary (db1). Depending on your data format, you can use different utilities:

    mysql -u root -p database_name < dumpfile.sql
    
    LOAD DATA INFILE '/path/to/file.csv'
    INTO TABLE table_name
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n';
    
  2. Check the cluster status in ClusterControl. Go to ClusterControl GUI → Clusters → locate the cluster → Nodes → Topology. Ensure db2 and db3 are both green (running) and in sync with db1.

  3. Once your data import finishes and replicas catch up, you can proceed with normal read/write operations on the primary, and replicas will stay up to date.
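Whether a replica has "caught up" can also be checked from the command line: on each replica, `SHOW REPLICA STATUS` (or `SHOW SLAVE STATUS` on MariaDB and older MySQL) reports the current lag. A minimal sketch of such a check, using a hypothetical lag threshold (not a ClusterControl setting):

```shell
# The lag value would come from `mysql -e "SHOW REPLICA STATUS\G"` on a
# replica (Seconds_Behind_Source; older MySQL/MariaDB call it
# Seconds_Behind_Master). MAX_LAG is a hypothetical threshold.
MAX_LAG=10  # seconds

replica_in_sync() {
  lag="$1"
  # "NULL" means the replication threads are not running at all
  [ "$lag" != "NULL" ] && [ "$lag" -le "$MAX_LAG" ]
}

replica_in_sync 3 && echo "replica caught up"
```

ClusterControl's topology view performs equivalent checks for you; a helper like this is only useful if you want to script your own gate before resuming writes.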

Step 5: Deploy a new load balancer

Using ProxySQL in front of a MySQL replication setup enables read-write splitting and enhances performance, scalability, and high availability through intelligent query routing, load balancing, connection pooling, failover handling, and security. It lets you fully leverage the power of the cluster by managing traffic efficiently while keeping the application connected even during node failures or maintenance events.

  1. Go to ClusterControl GUI → Clusters → locate the cluster → Actions → Add new → Load balancer → Create balancer → ProxySQL → Continue.

  2. In the Create a load balancer wizard, configure the ProxySQL as below:

    • Version: 2.x
    • Server address: Specify "192.168.99.101" and press Enter. Wait until everything is green. If a red notice appears, inspect the error seen by ClusterControl and fix it.
    • Admin port: 6032 (default)
    • Listening port: 6033 (default)
    • Disable firewall: Checked (default)
    • Disable SELinux/AppArmor: Checked (default)
    • Import configuration: Unchecked (default)
    • Use native clustering: Unchecked (default) (1)

      1. Check this box if you want to deploy two or more ProxySQL instances, or if you plan to add more ProxySQL instances in the future.
    • Administration user: proxysql-admin (default)
    • Administration password: Specify the password for ProxySQL admin user. ClusterControl will create this user.
    • Monitor user: proxysql-monitor (default)
    • Monitor password: Specify the password for ProxySQL monitor user. ClusterControl will create this user.
    • Create new user: Selected
    • Database username: Specify the database username that will be load balanced via ProxySQL. ClusterControl will create this user.
    • Database password: Specify the database password for the database username above.
    • Database name: Specify the database and/or table to be granted for this database user. If you want to grant this user to all tables for database example, specify example.*.
    • MySQL privilege(s): ALL PRIVILEGES
    • 192.168.99.111:3306 (Primary): On (default)
    • 192.168.99.112:3306 (Replica): On (default)
    • 192.168.99.113:3306 (Replica): On (default)
    • Leave the Max replication lag, Max connection and Weight as default.
    • Are you using implicit transactions?: Off (default) (1)

      1. Toggle on if the application relies on SET autocommit=0 to create a transaction for you.
    • Review the summary of your deployment. You can always go back to any previous section to modify your configurations if you wish. The deployment settings will be kept until you exit the deployment wizard.
  3. Repeat the same steps to add the second ProxySQL instance, 192.168.99.102 (lb2).

  4. To deploy the load balancer, click Finish.

  5. ClusterControl will now start provisioning the ProxySQL instance. You can monitor the progress from Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the load balancer instance will appear in the Nodes page.

Step 6: Connecting to the database cluster

At this point, your application should be able to access the database cluster via the following endpoints:

  • A load balancer instance, 192.168.99.101 or 192.168.99.102, on port 6033. This is a load-balanced MySQL connection via ProxySQL. Use the database user credentials that you created in Step 5.
  • If there are two or more load balancers, tie them together with a virtual IP address (192.168.99.100 in this example) provided by Keepalived. Use the virtual IP address in your connection string instead of the load balancers' primary IP addresses.
  • One of the database nodes on port 3306. However, you need to create a new database user using ClusterControl, since the user created during the ProxySQL deployment stage is only allowed to connect from the ProxySQL hosts.
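Put together, the endpoints can be summarized as below. This is a sketch using this guide's example addresses; the user `appuser` and database `example` are placeholders for the credentials created in Step 5:

```shell
# Endpoint summary for this guide's topology. "appuser" and "example"
# are placeholders; substitute the credentials created in Step 5.
VIP=192.168.99.100       # Keepalived virtual IP fronting both ProxySQL nodes
PROXYSQL_PORT=6033       # ProxySQL listening port (load-balanced, RW split)
PRIMARY=192.168.99.111   # direct access to db1 (separate DB user required)

# Application traffic goes through the virtual IP:
echo "mysql -u appuser -p -h ${VIP} -P ${PROXYSQL_PORT} example"
# Direct (maintenance) access to the primary:
echo "mysql -u appuser -p -h ${PRIMARY} -P 3306 example"
```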

Step 7: Enable automatic backups

To protect your data, set up automatic backups. ClusterControl supports both logical and physical backups for MySQL replication. For a small dataset, a daily logical backup should be sufficient since, by default, ClusterControl configures every database server with binary logs enabled, which can be used for point-in-time recovery.

  1. Go to ClusterControl GUI → choose the cluster → Backups to open the backup overview page of the cluster.
  2. Go to Create Backup → Schedule a Backup to schedule an automated backup. It will open a backup wizard to follow.
  3. In the Create a backup schedule wizard, configure the backup as below:

    • Schedule name: Daily backup - mysqldump
    • Cluster: (leave as it is)
    • Backup host: db3
    • Backup method: mysqldump
    • Dump type: Complete PITR-compatible
    • Upload backup to cloud: Off
    • Compression: On (default)
    • Compression level: 6
    • Enable encryption: On
    • Verify backup: Off (default)
    • Retention: On (default)
    • Use extended insert: On (default)
    • Failover backup if node is down: On
    • Storage location: Store on controller (default)
    • Storage directory: /root/backups (default)
    • Backup subdirectory: BACKUP-%I (default)
    • Set backup schedule: Simple
    • Every: day at 2:00
    • Select timezone: Use your preferred timezone
    • Review the summary of the backup configuration. You can always go back to any previous section to modify your configurations if you wish. The backup configuration settings will be kept until you exit the wizard.
  4. Click Create to schedule the backup.

Once configured, ClusterControl will take care of the automated backups, ensuring your data is safe. ClusterControl also supports advanced backup features like partial backup, backup encryption, backup verification, backup throttling, backup to cloud, and point-in-time recovery.
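For reference, a complete PITR-compatible logical backup corresponds roughly to the mysqldump invocation below. This is an assumption based on standard mysqldump options, not ClusterControl's exact command line; the command is assembled and printed for review:

```shell
# Approximate flags behind a complete, PITR-compatible mysqldump
# (an assumption, not ClusterControl's exact command line).
# --single-transaction takes a consistent InnoDB snapshot;
# --source-data records the binlog coordinates needed for point-in-time
# recovery (use --master-data on MariaDB and pre-8.0.26 MySQL).
DUMP_OPTS="--single-transaction --source-data=2 --all-databases --triggers --routines --events"
# Assembled command, printed for review (it would run on the backup host, db3):
echo "mysqldump $DUMP_OPTS | gzip -6 > /root/backups/backup.sql.gz"
```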

Step 8: Configure alerts

To keep track of any issues or incidents in your cluster, it's important to set up alerting. ClusterControl supports sending alarms and alerts to email, web hooks and third-party notification services like Slack or Telegram. In this example, we are going to use email.

First, configure the mail server. Go to ClusterControl GUI → Settings → You don't have a mail server configured. Configure now → SMTP Server. Fill in all the necessary information about the SMTP server. You can also opt for Sendmail; however, a mail transfer agent (sendmail, postfix, or exim) must be installed on the ClusterControl server.

Once the mail server is set up, configure the alerts and recipients as below:

  1. Go to ClusterControl GUI → choose the cluster → Settings → Email Notifications.
  2. Choose the user group you belong to from the User group dropdown.
  3. Choose your email address from the Users in the selected group dropdown.
  4. Click Enable.
  5. Set the Digest delivery time to when you want a daily digest (summarized events) to be sent to you.
  6. Set all Critical events to "Deliver" (default), set all Warning events to "Digest", and you may ignore the Info events.

This ensures you are always notified when critical issues arise and can act quickly to resolve them. You will also get an email with summarized warning events every day at the configured delivery time.

Tip

You can also configure alarms to be sent to third-party notification systems (Slack, Telegram), incident management systems (PagerDuty, ServiceNow, OpsGenie) or web hooks. See Integration → Notification Services.

Step 9: Manage your cluster and load balancer

ClusterControl provides many other features for managing your cluster, including:

  • Database upgrades and patching: Perform minor version rolling upgrades to keep your cluster up to date without downtime. This feature is available at ClusterControl GUI → choose the cluster → Actions → Upgrades.
  • Database node management: Perform start/stop/restart node, reboot host, bootstrap cluster, activate/deactivate read-only, resync node, change primary source and rebuild replica. These features are available at ClusterControl GUI → choose the cluster → Nodes → Actions.
  • ProxySQL management: Perform ProxySQL management related to query rules, query cache, proxy user management, export configuration, synchronize instances (in cluster mode), restart service and reboot host. These features are available at ClusterControl GUI → choose the cluster → Nodes → Actions and ClusterControl GUI → choose the cluster → Nodes → ProxySQL.
  • Configuration management: Perform database configuration changes globally or on an individual database node. This feature is available at ClusterControl GUI → choose the cluster → Manage → Configuration.
  • Database user management: Manage database users and privileges globally. This feature is available at ClusterControl GUI → choose the cluster → Manage → DB Users.
  • Backup management: Create, schedule, restore, perform point-in-time recovery, upload backup to cloud, verify an existing backup, and set retention period. These features are available at ClusterControl GUI → choose the cluster → Backups → Actions and ClusterControl GUI → choose the cluster → Backups → More.
  • Maintenance management: Activate, deactivate, remark and schedule maintenance mode for all nodes. This feature is available at ClusterControl GUI → choose the cluster → Nodes → Actions → Schedule maintenance.
  • SSH console: Access your nodes directly from ClusterControl GUI via web SSH console. This feature is available at ClusterControl GUI → choose the cluster → Nodes → Actions → SSH Console.

Step 10: Scale your cluster

One of the key benefits of using ClusterControl is the ease of scaling: extending your MySQL replication setup is straightforward. You can scale out the cluster with another database node or a read-only replica (delayed or real-time, and asynchronous or semi-synchronous replication). Common use cases are dedicating replica nodes to read-only workloads, backups, reporting, or analytics.

Adding a new database node or replica node is an online, non-blocking operation. It should not cause downtime to the running cluster; however, expect increased load on the source node if you stream the data directly from the primary. You may opt to rebuild the new node from an existing backup to minimize this risk.

ClusterControl assumes that the new node you want to add meets the requirements described under Prerequisites and is configured with proper SSH key-based authentication as shown under Step 1: Set up SSH key-based authentication.

  1. To add a new replica, go to ClusterControl GUI → choose the cluster → Actions → Add new → Replica node → Create a replica node.

  2. In the Create a replica node wizard, configure the following:

    • Port: 3306
    • Data directory: /var/lib/mysql
    • Configuration template: my.cnf.repl80
    • Install software: On (default)
    • Disable firewall: Checked (default)
    • Disable SELinux/AppArmor: Checked (default)
    • Rebuild from a backup: Off (default)
    • Include in load balancer set (if exists): On
    • Delay the replica node: Off (default)
    • Primary node: Choose the primary node from the dropdown.
    • Node: Specify the IP address or hostname of the node that you want to add and press Enter.
    • Review the summary of the deployment. You can always go back to any previous section to modify your configurations if you wish. The deployment configuration settings will be kept until you exit the wizard.
  3. Click Finish to trigger the deployment job.

  4. ClusterControl will start provisioning the new replica node. You can monitor the progress from Activity center, where you will see detailed logs of the deployment. This process can take a few minutes. Once the deployment is complete, the node will appear in the Nodes page.

Conclusion

Congratulations! You've successfully deployed, monitored, managed, and scaled your first MySQL replication cluster using ClusterControl. Explore additional features like failover management, advanced monitoring, backup management, and more to optimize your database infrastructure for high availability and performance.