ClusterControl Docker Image

Table of Contents

  1. Supported Tags
  2. Overview
  3. Image Description
  4. Run Container
  5. Environment Variables
  6. Service Management
  7. LDAP
  8. Examples
  9. Development
  10. Disclaimer

Supported Tags

Overview

ClusterControl is management and automation software for database clusters. It helps you deploy, monitor, manage and scale your database clusters. This Docker image comes with ClusterControl installed and configured with all of its components, so you can immediately use it to deploy a new set of database servers/clusters or manage existing ones.

Supported database servers/clusters:

  • Galera Cluster for MySQL
  • Percona XtraDB Cluster
  • MariaDB Galera Cluster
  • MySQL Replication
  • MySQL single instance
  • MySQL Cluster (NDB)
  • MongoDB sharded cluster
  • MongoDB replica set
  • PostgreSQL (single instance/streaming replication)

More details are available on the Severalnines website.

Image Description

To pull the ClusterControl image, simply run:

$ docker pull severalnines/clustercontrol

The image is based on CentOS 7 with Apache 2.4 and consists of the ClusterControl packages plus their prerequisite components:

  • ClusterControl controller, UI, cloud, notification and web-ssh packages installed from the Severalnines repository.
  • MySQL server with the cmon database, the cmon user grant and the dcps database for the ClusterControl UI.
  • Apache with SSL installed and the file and directory permissions required by the ClusterControl UI.
  • SSH key for ClusterControl usage.
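
If you want to check exactly which ClusterControl packages a pulled image ships with, one way is to query the RPM database inside it. A minimal sketch, overriding the entrypoint so no services are started (exact package names may vary between releases):

# List the ClusterControl-related RPMs bundled in the image
$ docker run --rm --entrypoint /bin/bash severalnines/clustercontrol -c "rpm -qa | grep -iE 'clustercontrol|cmon'"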

Run Container

To run a ClusterControl container, the simplest command would be:

$ docker run -d severalnines/clustercontrol

However, for production use, you are advised to run the container with a sticky IP address/hostname and persistent volumes so it survives restarts, upgrades and rescheduling, as shown below:

# Create a Docker network
$ docker network create --subnet=192.168.10.0/24 db-cluster

# Start the container
$ docker run -d --name clustercontrol \
--network db-cluster \
--ip 192.168.10.10 \
-h clustercontrol \
-p 5000:80 \
-p 5001:443 \
-v /storage/clustercontrol/cmon.d:/etc/cmon.d \
-v /storage/clustercontrol/datadir:/var/lib/mysql \
-v /storage/clustercontrol/sshkey:/root/.ssh \
-v /storage/clustercontrol/cmonlib:/var/lib/cmon \
-v /storage/clustercontrol/backups:/root/backups \
severalnines/clustercontrol

The recommended persistent volumes are:

  • /etc/cmon.d - ClusterControl configuration files.
  • /var/lib/mysql - MySQL datadir to host cmon and dcps database.
  • /root/.ssh - SSH private and public keys.
  • /var/lib/cmon - ClusterControl internal files.
  • /root/backups - Default backup directory, used only if ClusterControl is the backup destination.
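
If you prefer to prepare the host-side directories up front rather than letting Docker create them on first run, a minimal sketch assuming the same /storage/clustercontrol layout used in the examples:

# Pre-create the host directories backing the persistent volumes
$ mkdir -p /storage/clustercontrol/{cmon.d,datadir,sshkey,cmonlib,backups}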

Alternatively, if you would like to enable agent-based monitoring via Prometheus, you have to make the following paths persistent as well:

  • /var/lib/prometheus - Prometheus data directory.
  • /etc/prometheus - Prometheus configuration directory.

Therefore, the run command for agent-based monitoring via Prometheus would be:

$ docker run -d --name clustercontrol \
--network db-cluster \
--ip 192.168.10.10 \
-h clustercontrol \
-p 5000:80 \
-p 5001:443 \
-v /storage/clustercontrol/cmon.d:/etc/cmon.d \
-v /storage/clustercontrol/datadir:/var/lib/mysql \
-v /storage/clustercontrol/sshkey:/root/.ssh \
-v /storage/clustercontrol/cmonlib:/var/lib/cmon \
-v /storage/clustercontrol/backups:/root/backups \
-v /storage/clustercontrol/prom_data:/var/lib/prometheus \
-v /storage/clustercontrol/prom_conf:/etc/prometheus \
severalnines/clustercontrol

After a moment, you should be able to access the ClusterControl Web UI at {host's IP address}:{host's port}, for example port 5000 (HTTP) or 5001 (HTTPS) if you used the port mappings above.
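
A quick way to confirm the container is up and the web server answers on the published ports, assuming the container name and the 5000/5001 mappings from the commands above:

# Follow the container logs to watch the bootstrap progress (Ctrl+C to stop)
$ docker logs -f clustercontrol

# Check that Apache inside the container answers on the published ports
$ curl -I http://localhost:5000/
$ curl -kI https://localhost:5001/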

We have built a complementary image called centos-ssh to simplify database deployment with ClusterControl. It supports automatic deployment (Galera Cluster) and can also be used as a base image for database containers (all cluster types are supported). Details are available here.

Environment Variables

  • CMON_PASSWORD={string}

    • MySQL password for the 'cmon' user. Defaults to 'cmon'. Using a Docker secret is recommended.
    • Example: CMON_PASSWORD=cmonP4s5
  • MYSQL_ROOT_PASSWORD={string}

    • MySQL root password for the ClusterControl container. Defaults to 'password'. Using a Docker secret is recommended.
    • Example: MYSQL_ROOT_PASSWORD=MyPassW0rd
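
A minimal sketch of passing both variables at container start, using the example values above (replace them with your own):

$ docker run -d --name clustercontrol \
-e CMON_PASSWORD=cmonP4s5 \
-e MYSQL_ROOT_PASSWORD=MyPassW0rd \
-p 5000:80 \
-p 5001:443 \
severalnines/clustercontrol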

Service Management

Starting from version 1.4.2, ClusterControl requires a number of processes to be running:

  • sshd - SSH daemon. The main communication channel.
  • mysqld - MySQL backend, running on Percona Server 5.6.
  • httpd - Web server running on Apache 2.4.
  • cmon - ClusterControl backend daemon. The brain of ClusterControl. It depends on mysqld and sshd.
  • cmon-ssh - ClusterControl web-based SSH daemon, which depends on cmon and httpd.
  • cmon-events - ClusterControl notifications daemon, which depends on cmon and httpd.
  • cmon-cloud - ClusterControl cloud integration daemon, which depends on cmon and httpd.
  • cc-auto-deployment - ClusterControl automatic deployment script, running as a background process, which depends on cmon.

These processes are controlled by Supervisord, a process control system. To manage a process, use the supervisorctl client as shown in the following example:

[root@physical-host]$ docker exec -it clustercontrol /bin/bash
[root@clustercontrol /]# supervisorctl
cc-auto-deployment               RUNNING   pid 570, uptime 2 days, 19:11:54
cmon                             RUNNING   pid 573, uptime 2 days, 19:11:54
cmon-events                      RUNNING   pid 576, uptime 2 days, 19:11:54
cmon-ssh                         RUNNING   pid 575, uptime 2 days, 19:11:54
httpd                            RUNNING   pid 571, uptime 2 days, 19:11:54
mysqld                           RUNNING   pid 577, uptime 2 days, 19:11:54
sshd                             RUNNING   pid 572, uptime 2 days, 19:11:54
supervisor> restart cmon
cmon: stopped
cmon: started
supervisor> status cmon
cmon                             RUNNING   pid 2838, uptime 0:11:12
supervisor>

In some cases, you might need to restart the related service after a manual upgrade or configuration tweaking. Details on the start commands can be found inside conf/supervisord.conf.
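
You can also run one-off supervisorctl commands from the Docker host without entering the interactive shell, for example (assuming the container name used earlier):

# Restart the cmon process in one shot
$ docker exec -it clustercontrol supervisorctl restart cmon

# Check the status of all managed processes
$ docker exec -it clustercontrol supervisorctl status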

LDAP

Starting from version 1.8.2, ClusterControl introduces a new user management system, as described here. For LDAP, the configuration is stored inside /etc/cmon-ldap.cnf. Since no Docker volume is configured for this path, the configuration file has to be moved into the /etc/cmon.d/ directory to make it persistent. The entrypoint script contains logic to copy the file to /etc/cmon.d/cmon-ldap.cnf and symlink it back to /etc/cmon-ldap.cnf.

Therefore, whenever you have configured the LDAP settings (ClusterControl -> User Management -> LDAP Settings) and want to save them permanently, restart the container with the following command (which triggers the entrypoint script):

$ docker restart clustercontrol
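
After the restart, you can verify that the LDAP configuration ended up on the persistent volume and is symlinked back into place. A minimal check, assuming the container name and volume mapping used earlier:

# The file should live under /etc/cmon.d (persistent) with a symlink at /etc/cmon-ldap.cnf
$ docker exec -it clustercontrol ls -l /etc/cmon.d/cmon-ldap.cnf /etc/cmon-ldap.cnf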

Examples

Development

Please report bugs and submit improvements or suggestions via our support channel: https://support.severalnines.com

If you have any questions, you are welcome to get in touch via our contact us page or email us at info@severalnines.com.

Disclaimer

Although Severalnines offers ClusterControl as a Docker image, it is not intended for production usage. ClusterControl was never designed to run in a container environment due to its internal logic and system design. We maintain the Docker image on a best-effort basis, and it is not part of the product development roadmap and pipeline.
