
Terraform provider for ClusterControl

The Terraform Provider for Severalnines ClusterControl offers a seamless integration between the powerful infrastructure management tool, Terraform, and the robust database management capabilities of ClusterControl.

This provider empowers users to effortlessly provision, configure, and manage database clusters through Infrastructure as Code (IaC), streamlining the deployment process and enhancing the efficiency of managing complex database environments.

With a range of customizable configurations and automation capabilities, this provider simplifies the orchestration of database infrastructure, enabling teams to focus on innovation and development rather than mundane operational tasks. Leveraging the flexibility and scalability of Terraform alongside the comprehensive management features of Severalnines ClusterControl, organizations can achieve greater agility, reliability, and scalability in their database operations.

This project is hosted on GitHub as terraform-provider-clustercontrol.

Requirements

Name            Version
Terraform       >= 0.13.x
ClusterControl  >= 1.9.8

Providers

Name                                Version
Terraform ClusterControl Provider   >= 0.1

Quick Start

Installing and configuring ClusterControl for API access

To get started, follow the steps below (if you already have ClusterControl running in your environment, skip to step 4):

1. Getting started with ClusterControl

2. ClusterControl installation requirements

3. Install ClusterControl

4. Configure ClusterControl – Enable ClusterControl for API access by editing /etc/default/cmon with a text editor and set the RPC_BIND_ADDRESSES as shown below:

RPC_BIND_ADDRESSES="10.0.0.5,127.0.0.1"

Where 10.0.0.5 is the private IP of the ClusterControl host. Restart the ClusterControl service to apply the changes:

sudo systemctl restart cmon

5. Run a quick test to make sure you can access ClusterControl via its REST API (using curl or Postman):

curl -k 'https://10.0.0.5:9501/v2/clusters' -XPOST -d '{"operation": "getAllClusterInfo", "authenticate": {"username": "CHANGE-ME","password": "CHANGE-ME"}}'

Where username and password are valid ClusterControl login credentials. The output should be a JSON response containing details of the clusters managed by ClusterControl.

Setting up the Terraform provider for ClusterControl

In your environment, create a terraform.tfvars file to store your ClusterControl secrets. In the file, add the following secrets:

cc_api_url="https://<cc-host-private_ip>:9501/v2"
cc_api_user="CHANGE-ME"
cc_api_user_password="CHANGE-ME"

Create a file (e.g., main.tf) to hold the ClusterControl Terraform provider configuration and the resources you will create. Add the following configuration to the file:

terraform {
  required_providers {
    clustercontrol = {
      source = "severalnines/clustercontrol"
      version = "0.2.16" // refer to terraform registry for latest version
    }
  }
}

variable "cc_api_url" {
  type = string
}

variable "cc_api_user" {
  type = string
}

variable "cc_api_user_password" {
  type = string
}

provider "clustercontrol" {
  cc_api_url           = var.cc_api_url
  cc_api_user          = var.cc_api_user
  cc_api_user_password = var.cc_api_user_password
}

The above Terraform code installs the Terraform Provider for ClusterControl, imports the secrets from terraform.tfvars, and configures the provider for use across the environment.
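As an alternative to terraform.tfvars, you can keep these secrets out of files entirely: Terraform reads any environment variable named TF_VAR_<name> into the input variable of the same name. A minimal sketch (the URL below is a placeholder for your ClusterControl host):

```shell
# Terraform maps TF_VAR_<name> to the input variable <name>,
# so these replace the entries in terraform.tfvars.
export TF_VAR_cc_api_url="https://10.0.0.5:9501/v2"
export TF_VAR_cc_api_user="CHANGE-ME"
export TF_VAR_cc_api_user_password="CHANGE-ME"
```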

Run terraform init to initialize the directory and install the provider. You can also verify the correctness of the configuration with the terraform validate command.

With your ClusterControl Terraform provider configuration validated, you can now deploy database clusters. 

Deploying database clusters using Terraform for ClusterControl

To deploy a database cluster with the ClusterControl Terraform provider, you use the clustercontrol_db_cluster resource. The following example clustercontrol_db_cluster resource schema deploys a PostgreSQL cluster:

resource "clustercontrol_db_cluster" "test-postgresql" {
  db_cluster_create      = true
  db_cluster_name        = "PostgreSQL Cluster"
  db_cluster_type        = "pg-replication"
  db_version             = "15"
  db_vendor              = "postgresql"
  db_admin_user_password = var.cc_api_user_password
  db_host {
    hostname = "test-primary"
  }
  db_host {
    hostname = "test-replica"
  }
  ssh_key_file        = "/root/.ssh/id_rsa"
  ssh_user            = "root"
  db_deploy_agents    = true
  disable_firewall    = true
  disable_selinux     = true
  db_install_software = true
}

In the above resource:

  • The db_cluster_type, db_vendor, and db_version attributes define the cluster type, vendor, and version respectively.
  • db_cluster_create tells the provider to create a new cluster. To import an already existing cluster, use the db_cluster_import attribute instead.
  • The db_host blocks list the nodes in the cluster. The hostname of a DB node can also be its IP address. Each db_host block can be further configured via its nested schema.
  • ClusterControl uses passwordless SSH to deploy and manage the database cluster automatically. The ssh_user attribute holds the SSH user ClusterControl uses to connect to the DB nodes from the ClusterControl node, and ssh_key_file the path to that user's private key file on the ClusterControl node.
  • With db_deploy_agents set to true, ClusterControl automatically deploys Prometheus and other relevant agents for observability after setting up the initial database cluster.
  • If the nodes are new, without any database packages pre-installed, set db_install_software to true. This tells ClusterControl to install all the database packages it needs from their respective repositories, while the disable_firewall and disable_selinux attributes relax the node OS security configuration so that ClusterControl can download and install the DB packages.
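Building on the db_cluster_import attribute mentioned above, importing a cluster that ClusterControl already manages might look like the following sketch. This is an illustration only – the cluster name must match the existing cluster in ClusterControl, and the full set of attributes required for an import should be checked against the provider's registry documentation:

```hcl
resource "clustercontrol_db_cluster" "imported-postgresql" {
  db_cluster_import = true
  # Must match the name of the existing cluster in ClusterControl.
  db_cluster_name   = "PostgreSQL Cluster"
  db_cluster_type   = "pg-replication"
  db_vendor         = "postgresql"
}
```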

To apply the above resource, run the following commands:

$ terraform plan   # preview the configuration changes
$ terraform apply  # apply the configuration

After a few minutes, the cluster creation will complete and you should see output similar to the following:

clustercontrol_db_cluster.test-postgresql: Creation complete after 9m20s [id=1]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

You can verify that the cluster is up using the ClusterControl GUI or the ClusterControl CLI (s9s-tools), both of which come with your ClusterControl installation.
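You can also surface the cluster ID from the apply output ([id=1] above) as a Terraform output. A minimal sketch added to main.tf:

```hcl
output "postgresql_cluster_id" {
  # The resource's id attribute holds the ClusterControl cluster ID (e.g. 1).
  value = clustercontrol_db_cluster.test-postgresql.id
}
```

After terraform apply, terraform output postgresql_cluster_id prints the ID, which you can then pass to the GUI or CLI when inspecting the cluster.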

For more examples, see the examples documentation on deploying other types of database clusters – MySQL/MariaDB replication or Galera with ProxySQL, MongoDB replica set and/or sharded, Redis Sentinel, etc. Navigate to the docs folder for generated documentation on the Terraform provider plugin for ClusterControl.
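As a hedged sketch of one of those variants, a MariaDB replication cluster follows the same shape as the PostgreSQL example above. The db_cluster_type and db_version values shown here are assumptions for illustration, so confirm the exact strings in the provider's registry documentation:

```hcl
resource "clustercontrol_db_cluster" "test-mariadb" {
  db_cluster_create      = true
  db_cluster_name        = "MariaDB Replication Cluster"
  db_cluster_type        = "replication" # assumed value; check the provider docs
  db_vendor              = "mariadb"
  db_version             = "10.11"       # assumed version; use one your vendor repo provides
  db_admin_user_password = var.cc_api_user_password
  db_host {
    hostname = "mariadb-primary"
  }
  db_host {
    hostname = "mariadb-replica"
  }
  ssh_key_file        = "/root/.ssh/id_rsa"
  ssh_user            = "root"
  db_install_software = true
}
```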

Destroying a deployed database cluster

After navigating to the directory that was used to deploy the cluster, run:

terraform destroy
