Provisions and manages virtualization hosts to be used to host database containers.
The supported virtualization platforms are LXC/LXD and Amazon Web Services EC2 instances. Docker is not supported at the moment.
LXC Containers
Handling LXC containers is a new feature added to the CMON Controller and the s9s command-line tool. The basic functionality is available and tested: containers can be created, started, stopped, and deleted, and containers can even be created on the fly while installing clusters or cluster nodes.
To use LXC containers, one needs a container server: a computer that has the LXC software installed and configured, together with a proper account that allows the CMON Controller to access it. An LXC container server can be set up in two easy steps and one not-so-easy step:
- Install a Linux server and set it up so that the root user can SSH in from the CMON Controller with a key, without a password. Creating such access for the superuser is of course not the only way; it is just the easiest.
- Register the server as a container server on the CMON Controller by issuing the
s9s server --register --servers="lxc://IP_ADDRESS"
command. This will install the necessary software and register the server as a container server to be used later.
- The hard part is the network configuration on the container server. Most distributions by default have a network configuration that provides a local (host-only) IP address for the newly created containers. In order to provide a public IP address for the containers, the container server must have some sort of bridging or NAT configured, as sketched below.
A possible way to configure the network for a public IP address is described in this blog: Converting eth0 to br0 and getting all your LXC or LXD onto your LAN.
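As a rough illustration only (assuming an Ubuntu-style host that uses netplan; the file name, interface names, and addressing are placeholders that must be adapted to the actual distribution and network), converting eth0 into a bridge br0 and pointing LXC at it could look like this:

# /etc/netplan/01-br0.yaml: move eth0 under a bridge called br0
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
  bridges:
    br0:
      interfaces: [eth0]
      dhcp4: yes

$ sudo netplan apply

# /etc/lxc/default.conf: attach newly created containers to br0
# (on LXC 3.x and later the keys are lxc.net.0.type and lxc.net.0.link)
lxc.network.type = veth
lxc.network.link = br0

With such a setup the containers request addresses from the LAN's DHCP server and become reachable from the CMON Controller.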
CMON-Cloud Virtualization
The cmon-cloud containers are an experimental virtualization backend recently added to the CMON Controller.
Usage
s9s server {command} {options}
Command
Name, shorthand | Description |
---|---|
--add-acl | Adds a new ACL entry to the server or modifies an existing ACL entry. |
--create | Creates a new server. If this option is provided, the controller will use SSH to discover the server, install the necessary software packages, and modify the configuration if needed so that the server can host containers. |
--get-acl | Lists the ACL of a server. |
--list | Lists the registered servers. |
--list-disks | Lists the disks found in one or more servers. |
--list-images | Lists the images available on one or more servers. With the --long command-line option a more detailed list is shown. |
--list-memory | Lists the memory modules from one or more servers. |
--list-nics | Lists the network controllers from one or more servers. |
--list-partitions | Lists the partitions from one or more servers. |
--list-processors | Lists the processors from one or more servers. |
--list-regions | Prints the list of regions the server(s) support, together with some important information (e.g. whether the controller has credentials to use those regions). |
--list-subnets | Lists all the subnets that exist on one or more servers. |
--list-templates | Lists the supported templates. Various virtualization technologies handle templates differently, and some even use different terminology ("size" is one such synonym). In the case of LXC, any stopped container can be used as a template for new containers. |
--register | Registers an existing container server. If this command-line option is provided, the controller will register the server to be used as a container server later. No software packages are installed and no configuration is changed. |
--start | Boots up a server. This option will try to start a server that is physically turned off (using e.g. the wake-on-LAN feature). |
--stat | Prints details about one or more servers. |
--stop | Shuts down and powers off a server. When this command-line option is provided, the controller will run the shutdown program on the server. |
--unregister | Unregisters a container server, simply removing it from the controller. |
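For example, once a server has been registered, the hardware and image inventory described by the list commands above can be inspected as follows; the output naturally depends on the registered servers:

$ s9s server --list-images --long
$ s9s server --list-templates --long
$ s9s server --list-disks --long
$ s9s server --stat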
Options
Name, shorthand | Description |
---|---|
--log | If the s9s application created a job and this command-line option is provided, it will wait until the job is executed. While waiting, the job log is shown unless the silent mode is set. |
--recurrence=CRONTABSTRING | This option can be used to create recurring jobs: jobs that are repeated over and over again until they are manually deleted. Every time the job is repeated, a new job is instantiated by copying the original recurring job and starting the copy. The option's argument is a crontab-style string defining the recurrence of the job. See Crontab. |
--schedule=DATETIME | The job will not be executed now but is scheduled to execute later. The DateTime string is sent to the backend, so every format supported by the controller is accepted. |
--timeout=SECONDS | Sets the timeout for the created job. If the execution of the job is not finished before the timeout (counted from the start time of the job) expires, the job will fail. Some jobs might not support the timeout feature; the controller might ignore this value. |
--wait | If the application created a job (e.g. to create a new cluster) and this command-line option is provided, the s9s program will wait until the job is executed. While waiting, a progress bar is shown unless the silent mode is set. |
--acl=ACLSTRING | The ACL entry to set. |
--os-key-file=PATH | The SSH key file used to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user), the controller will try to log in with the default settings. |
--os-password=PASSWORD | The SSH password used to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user), the controller will try to log in with the default settings. |
--os-user=USERNAME | The SSH user name used to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user), the controller will try to log in with the default settings. |
--refresh | Do not use cached data; collect the information again. |
--servers=LIST | A list of servers. |
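To show how the authentication and job options combine, here is a sketch that creates a container server authenticating with an SSH key instead of a password and waits for the job to finish; the user name, key path, and IP address are placeholders:

$ s9s server \
    --create \
    --os-user=testuser \
    --os-key-file=/home/testuser/.ssh/id_rsa \
    --servers=lxc://192.168.0.250 \
    --wait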
Server List
Using the --list and --long command-line options, a detailed list of the servers can be printed. Here is an example of such a list:
$ s9s server --list --long
PRV VERSION #C OWNER GROUP NAME IP COMMENT
lxc 2.0.8 5 pipas testgroup core1 192.168.0.4 Up and running.
lxc 2.0.8 5 pipas testgroup storage01 192.168.0.17 Up and running.
Total: 2 server(s)
The list contains the following fields:
Field | Description |
---|---|
PRV | The name of the provider software. The software that will handle containers or virtual machines on the server. One server can have only one such system, but multiple servers can be registered using one physical computer. |
VERSION | The version of the provider software. |
#C | The number of containers/virtual machines currently hosted by the server. |
OWNER | The owner of the server object. |
GROUP | The group owner of the server object. |
NAME | The hostname of the server. |
IP | The IP address of the server. |
COMMENT | A human-readable description of the server and its state. |
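Since the controller may answer from cached data, the --refresh option can be added to force it to collect the information again before printing the list:

$ s9s server --list --long --refresh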
Examples
Register a virtualization host:
$ s9s server --register --servers=lxc://storage01
Check the list of virtualization hosts:
$ s9s server --list --long
Create a virtualization server with an operating system username and password to be used to host containers. The controller will try to access the server using the specified credentials:
$ s9s server \
--create \
--os-user=testuser \
--os-password=passw0rd \
--servers=lxc://192.168.0.250 \
--log
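When a server is no longer needed as a container host, it can be removed from the controller again; a sketch, assuming --unregister accepts the same --servers argument as --register. Note that unregistering only drops the registration and does not uninstall anything from the server:

$ s9s server \
    --unregister \
    --servers=lxc://192.168.0.250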