EduhG / koko-devops

Sample repo demonstrating infrastructure provisioning using a CI/CD pipeline


Koko Devops

Sample DevOps project demonstrating automatic infrastructure setup, CI/CD pipelines, and a minimal Flask app hosted on AWS.

Technology Stack

Getting Started

Download the source code of this project from GitHub:

git clone git@github.com:EduhG/koko-devops.git

or

git clone https://github.com/EduhG/koko-devops.git

Prerequisites

To get started you need Terraform installed locally. You can install it quickly by following this tutorial: Install Terraform

You also need access to an AWS account. Create an IAM user with access to ECR and EC2.

Configuration

Setup

Configure Infrastructure

First, let's create the necessary resources required to run the application. We will use Terraform to do this.

Create an SSH key pair. This is optional, as you can use your default key. Remember not to overwrite your current one located in ~/.ssh/id_rsa.

ssh-keygen -t rsa -b 4096
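To be safe, you can write the new key pair to a separate path instead of the default (the filename below is just an illustrative example):

```shell
# Generate a dedicated key pair for this project; the path is illustrative.
KEY_PATH="$HOME/.ssh/koko_rsa"
mkdir -p "$(dirname "$KEY_PATH")"
ssh-keygen -t rsa -b 4096 -N "" -f "$KEY_PATH"
```

You would then point public_key_path and private_key_path at this pair in terraform.tfvars.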

Create a terraform.tfvars file and fill it with the following details:

touch devops/infra/terraform.tfvars
public_key_path       = "LOCATION_OF_PUBLIC_SSH_KEY"
private_key_path      = "LOCATION_OF_PRIVATE_SSH_KEY"
aws_access_key_id     = "ACCESS_KEY_ID_CREATED_ABOVE"
aws_secret_access_key = "SECRET_ACCESS_KEY_CREATED_ABOVE"
cicd_instance_type    = "t3a.small"
master_instance_type  = "t3a.medium"
worker_instance_type  = "t3a.micro"
workers_count         = 0

Create the required resources using the helper command from the Makefile:

make apply
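If you prefer to run Terraform directly instead of the Makefile helper, the equivalent is roughly the standard Terraform workflow (a sketch, assuming the configuration lives in devops/infra):

```shell
# Roughly what `make apply` is assumed to wrap:
cd devops/infra
terraform init    # download providers and initialize state
terraform apply   # review the plan, then type "yes" to create the resources
```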

This will create two EC2 servers along with required security groups.

CI/CD Server - Runs all our CI/CD pipelines. It is configured using Jenkins and Ansible.

Kubernetes Master Node - Runs our Kubernetes cluster.

Once the infrastructure has been created, we will configure Jenkins. To do this, first SSH into the CI/CD server:

make ssh-cicd

Copy the initial Jenkins admin password:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword
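Cloud-init may still be installing Jenkins when you first log in. A small sketch that waits for the password file to appear (the retry interval is arbitrary):

```shell
# Poll until Jenkins has written its initial admin password, then print it.
until sudo test -f /var/lib/jenkins/secrets/initialAdminPassword; do
  echo "Jenkins still initializing, retrying in 15s..."
  sleep 15
done
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```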

If the file above is unavailable, give it a few more minutes; Jenkins may still be installing.

Configure Jenkins

Open your browser and visit http://CICD_SERVER_IP:8080. On this page, paste the Jenkins admin password we copied in the last step. Complete the initial Jenkins setup by creating an admin user and installing the suggested plugins.

After installing the suggested plugins, we will install four more plugins used by our CI/CD pipeline. In your Jenkins instance, go to Manage Plugins and install the following: CloudBees AWS Credentials, Amazon ECR, Docker Pipeline, and Multibranch Scan Webhook Trigger. You can search for these plugins in the Available tab. After they're installed, they appear in the Installed tab.

Next we need to configure some credentials that jenkins will use to run our pipelines. In your Jenkins instance, go to Manage Jenkins, then Manage Credentials, then Jenkins Store, then Global Credentials (unrestricted), and finally Add Credentials. Fill in the following fields, leaving everything else as default:

  • Kind - AWS credentials
  • ID - aws-credentials
  • Access Key ID - Access Key ID from earlier
  • Secret Access Key - Secret Access Key from earlier

Click OK to save.

Update Jenkinsfile

Next, update the Jenkinsfile. If you used a different ID when creating the AWS credentials, change aws-credentials in the Jenkinsfile to the ID you created.

Also update the Docker registry URL to match the one created when you ran make apply; you can get it from the Terraform outputs section.

Then push these changes to your GitHub repo.

Create Jenkins Pipeline

Create a Jenkins multibranch pipeline. Navigate to Dashboard, then New Item, enter your preferred name, choose Multibranch Pipeline, and finally click OK.

Next, fill in the form. Display Name can be the same as the GitHub repo name.

Next, under Branch Sources, select Git, then add the URL of your GitHub repository for this project.

Next, configure Behaviors. Leave the defaults, then add Filter by name (with wildcards). In the Include field, add main feature/*. This will make the pipeline run whenever there are changes to either of the two branch patterns: main and feature/*.

Next, add Check out to matching local branch, Clean before checkout, and Clean after checkout.

Next, under Scan Repository Triggers, select Scan by webhook and set the trigger token (WEBHOOK_TOKEN).

Finally click Save.

This will scan the GitHub repository, configuring pipelines for the two branch patterns we specified.

The main branch build will configure and install the Kubernetes cluster and finally launch our application within it.

Automatically trigger pipeline

Finally, to automatically trigger the build pipeline when the GitHub repository is updated, configure a GitHub webhook.

In your GitHub repository, navigate to Settings, then Webhooks, and create a webhook with the URL http://CICD_SERVER_IP:8080/multibranch-webhook-trigger/invoke?token=WEBHOOK_TOKEN. This will automatically trigger our pipeline whenever a change happens in the GitHub repo.
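You can verify the trigger endpoint manually before wiring it into GitHub (substitute your own server IP and token):

```shell
# A successful response means the multibranch webhook endpoint is reachable.
curl -s "http://CICD_SERVER_IP:8080/multibranch-webhook-trigger/invoke?token=WEBHOOK_TOKEN"
```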

Configure Master Node

Since we are using a single instance for our Kubernetes cluster, we need to configure Kubernetes to allow pod deployment on the master node.

SSH into the master node

make ssh-master

You can skip this step and use the alternative method described below.

Once logged in, remove the scheduling taints from the node:

sudo kubectl taint node --all node-role.kubernetes.io/control-plane-

sudo kubectl taint node --all node-role.kubernetes.io/master-

We use sudo because $HOME/.kube/config is configured for the root user.
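You can confirm the taints were removed; once the node accepts workloads, the output should show Taints: &lt;none&gt;:

```shell
# Check that no scheduling taints remain on the master node.
sudo kubectl describe nodes | grep Taints
```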

After a few minutes, we should be able to access the deployed app in the browser at http://MASTER_SERVER_IP:30500

Alternatively, you can increase the workers_count option in devops/infra/terraform.tfvars. This will create extra EC2 instances to act as worker nodes.

Then downgrade the EC2 instance type for the master node to t3a.micro, since we won't be deploying our application on the master.
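For this alternative, the relevant terraform.tfvars values would change along these lines (the worker count is illustrative):

```
master_instance_type  = "t3a.micro"
workers_count         = 1
```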

Testing the API

After you visit the root URL (http://MASTER_SERVER_IP:30500/), you can test the timeout behavior by adding a runtime query parameter to the URL. For example http://MASTER_SERVER_IP:30500/?runtime=35

This will make the endpoint time out after 30 seconds.
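As a local illustration of this cut-off pattern (not the app's actual implementation), GNU coreutils' timeout kills a task that runs longer than its limit, shown here with shortened durations:

```shell
# timeout kills the command after 2s; exit status 124 signals a timeout.
timeout 2s sleep 5
echo "exit status: $?"   # prints "exit status: 124"
```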

Tearing down resources

The deployed application and all its resources can be destroyed with Terraform by running the following command from the devops/infra directory:

terraform destroy

Approach

To easily create and destroy resources, Terraform is the de facto tool. It's easy to use and cloud agnostic.

To configure the created EC2 instances, I've used Ansible, as it handles this more reliably: it fails fast when an error occurs.

Some configuration has been done using cloud-init, especially for the CI/CD server, mainly because I've assumed this will be running from scratch. The only thing the user needs installed is Terraform.

For the CI/CD pipelines, I've used Jenkins, as it's easy to install, set up, and configure. Its wide range of plugins makes it easy to integrate with any other provider.

Finally, the app is deployed in a Kubernetes cluster that is fully configured by the CI/CD pipeline.

Monitoring

To quickly understand what's going on in production, we have integrated Elasticsearch, Logstash, and Kibana with Filebeat. Filebeat enables us to aggregate all logs from the running containers and send them to Logstash.

We've used kibana to visualize the logs and to quickly monitor for timeout errors.

To view the Kibana Dashboard head over to http://MASTER_SERVER_IP:30560/

Improvements

Implement monitoring and alerts. This can be done using Prometheus.

Increase the number of nodes in the cluster. This would make the entire process automated, as the user won't need to SSH into the master node to remove its taints.

Lastly, configuring Jenkins can be quite tedious. I could probably make this easier (account setup and plugin installation) by using Docker and a custom Jenkins build. Some aspects could also be automated using JCasC (Jenkins Configuration as Code).
