Puppet Module

If you are automating your infrastructure using Puppet, we have created a module for this purpose, available on Puppet Forge. Installing the module is as easy as:

$ puppet module install severalnines-clustercontrol
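
To confirm the module was installed into your $modulepath, you can list the installed modules on the Puppet master:

$ puppet module list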

Requirements

If you haven’t changed the default $modulepath, this module will be installed under /etc/puppet/modules/clustercontrol on your Puppet master host. This module requires the following criteria to be met:

  • The ClusterControl node must be a clean host, used solely for ClusterControl.
  • The ClusterControl node must have an internet connection during deployment. After the deployment completes, ClusterControl does not need internet access to work.

Pre-installation

ClusterControl requires proper SSH key configuration and a ClusterControl API token. Use the helper script located at $modulepath/clustercontrol/files/s9s_helper.sh to generate them.

Generate the SSH key that ClusterControl will use to manage your database nodes. Run the following command on the Puppet master:

$ bash /etc/puppet/modules/clustercontrol/files/s9s_helper.sh --generate-key
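
The helper stores the generated key inside the module's files directory so Puppet can distribute it to the managed nodes. You can verify it is in place before proceeding (the id_rsa filename is the conventional default and is assumed here):

$ ls -l /etc/puppet/modules/clustercontrol/files/id_rsa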

Then, generate an API token:

$ bash /etc/puppet/modules/clustercontrol/files/s9s_helper.sh --generate-token
b7e515255db703c659677a66c4a17952515dbaf5

Attention

These two steps are mandatory and only need to be run once (unless you intentionally want to regenerate them). The first command generates an RSA key (if one does not already exist) for the module to use; this key must exist in the module's directory on the Puppet master before deployment begins.

Installation

Specify the generated token in the node definition, as shown in the example below.

Example hosts:

clustercontrol.local    192.168.1.10
galera1.local           192.168.1.11
galera2.local           192.168.1.12
galera3.local           192.168.1.13
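
If these hostnames are not resolvable via DNS, a minimal sketch is to map them in /etc/hosts on every node, using the addresses above:

192.168.1.10    clustercontrol.local
192.168.1.11    galera1.local
192.168.1.12    galera2.local
192.168.1.13    galera3.local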

Example node definition:

# ClusterControl host
node "clustercontrol.local" {
  class { 'clustercontrol':
    is_controller => true,
    ssh_user      => 'root',
    api_token     => 'b7e515255db703c659677a66c4a17952515dbaf5',
  }
}

After the deployment completes, open the ClusterControl UI at https://ClusterControl_host/clustercontrol and create the default admin login. You can then start adding existing database nodes/clusters or deploying new ones. Ensure that passwordless SSH is configured properly from the ClusterControl node to all database nodes beforehand.
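
One quick check, assuming the root SSH user and the example hosts above, is to run a remote command from the ClusterControl node; if it completes without a password prompt, the key setup is correct:

$ ssh root@galera1.local "hostname"
galera1.local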

To set up passwordless SSH on the target database nodes, you can use the following node definition:

# Monitored DB hosts
node "galera1.local", "galera2.local", "galera3.local" {
  class { 'clustercontrol':
    is_controller       => false,
    ssh_user            => 'root',
    mysql_root_password => 'r00tpassword',
    clustercontrol_host => '192.168.1.10',
  }
}

You can either instruct the agent to pull the configuration from the Puppet master and apply it immediately:

$ puppet agent -t

Or, wait for the Puppet agent service to apply the catalog automatically (depending on the runinterval value; the default is 30 minutes). Once completed, open the ClusterControl UI at https://ClusterControl_host/clustercontrol and create the default admin user and password.
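
If you rely on the automatic run, the interval can be tuned in the agent's puppet.conf; runinterval is a standard Puppet setting, and 30 minutes (1800 seconds) is its default:

# /etc/puppet/puppet.conf on the agent node
[agent]
runinterval = 1800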

For more examples of deployments using Puppet, please refer to Puppet Module for ClusterControl – Adding Management and Monitoring to your Existing Database Clusters. For more info on configuration options, please refer to the ClusterControl Puppet Module page.
