TomSpencerLondon / Devops

Practice with Devops Coach


Devops

https://www.coachdevops.com/search?q=docker+artifactory

Welcome to the DevOps Coaching program. Please go through the links below to understand the following:

  1. Coaching model
  2. Pre-requisites before you attend classes: https://www.coachdevops.com/2019/01/pre-requisites-before-starting-devops.html
  3. Learn the basics of Agile, DevOps, etc. before you start attending the sessions.
  4. Why are companies adopting the DevOps model?
  5. What does it take to become a successful DevOps engineer? https://www.coachdevops.com/2021/02/top-devops-skills-for-2021-skills.html

Please click on the link below (or copy and paste it into your browser) to work on the pre-requisites:

https://www.coachdevops.com/2019/05/welcome-package-devops-coaching.html

Nice course on terraform:

https://www.youtube.com/watch?v=lxjgxh5XzY0&list=PLJwvtUqYDmA5Kp7mHIiP5KrVkovKwh5P4

Devops Transformation

  • Understand development processes
  • Understand maturity of the organization
  • Transformation of the systems

Devops Tools

  • sonarqube
  • jenkins
  • static code analysis
  • quality gate
  • 42-50 lab exercises
  • Gunal lab assistant

image

Example

Before Cloud (on-prem) for an Online Shopping App

  • Peak usage during weekends / holidays
  • Less load during the rest of the time
  • Higher cost of procuring infrastructure
  • Need to plan capacity ahead
  • Dedicated server team to manage the infrastructure
  • Going global takes a lot of time

image

Why migrate to AWS Cloud?

https://aws.amazon.com/cloud-migration/

  • 20% infrastructure cost savings (autoscaling)
  • 66% increase in administrator productivity
  • 43% lower time to market for new features
  • 29% increase in staff focus on innovation
  • 45% fewer security-related incidents

These are some advantages of using cloud computing:

  1. Cost Savings
  2. Security
  3. Flexibility
  4. Mobility
  5. Insight
  6. Increased Collaboration
  7. Quality Control
  8. Disaster Recovery
  9. Loss Prevention
  10. Automatic Software Updates
  11. Competitive Edge
  12. Sustainability

Other advantages of cloud computing include provisioning resources on demand (elasticity), better resource utilization and no upfront cost. We can stop spending money to maintain a data center and go global in minutes. There is no need for a dedicated server team to manage the infrastructure.

This video is good for cloud migration: https://www.youtube.com/watch?v=AXQ7n7rDfFE

What is the role of a DevOps Engineer?

As a DevOps engineer you will:

  • work with the IT team
  • understand how developers work and collaborate
  • identify all manual tasks developers are doing
  • create streamlined release processes
  • automate software development, deployment and release management
    • Setup CICD pipeline - to automate build and deployment
    • Automate infrastructure setup
    • Automate the test execution process
    • Provide continuous feedback

The first challenge is to build basic Linux knowledge. We would then learn scripting in groovy, ruby, python or shell. Next we would learn source control management. For instance, a client may want to migrate from Azure Devops to git: https://azure.microsoft.com/en-us/products/devops/server

After that we would learn ansible and terraform for configuration management. Next we would learn Jenkins and Azure Devops. We would then learn monitoring tools such as Prometheus, Grafana, ELK stack. Next we would learn more about Azure or AWS. At the end of the process we would learn docker, containers, kubernetes and deploy on EKS.

Throughout, the focus should be on overall DevOps knowledge; the individual tools are learnt as extras. We should be consistent in our learning and keep practising. The DevOps learning list would include:

  • Learning Linux Admin basics
  • Understand how SDLC, Agile works
  • Learn any scripting language - Groovy, Python
  • Learn Git, SCM
  • Learn CICD tools
  • Learn configuration management tools
  • Learn Containers such as Docker
  • Learn any Cloud platform such as AWS, Azure, GCP
  • Learn Kubernetes for container orchestration

What is SDLC and what is DevOps?

DevOps is an approach to improving work in the software development lifecycle (SDLC) process.

Lab 0

For this lab we will create two virtual servers (EC2 instances) in the AWS cloud. EC2 instances are virtual servers provided by AWS. We will create two EC2 instances: one for Jenkins and one for Tomcat.
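If you prefer the CLI to the AWS console, an equivalent launch command looks like this sketch (the AMI ID, key pair and security group below are placeholders for values from your own account and region):

# Placeholders: substitute a real AMI ID, key pair and security group
aws ec2 run-instances \
  --image-id ami-0xxxxxxxxxxxxxxxx \
  --count 1 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0xxxxxxxxxxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=jenkins-server}]'

Run it a second time with Value=tomcat-server to get the second instance.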

Lab 1

We have now created the EC2 instances in the AWS cloud. We will now set up Java, Maven and Jenkins on the Jenkins EC2 instance: https://www.coachdevops.com/2023/03/install-jenkins-on-ubuntu-2204-setup.html
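The linked article has the full steps; a condensed sketch for Ubuntu looks like this (the Jenkins repository key URL was current at the time of writing - check pkg.jenkins.io if it has rotated):

sudo apt-get update
sudo apt-get install openjdk-11-jdk maven -y

# Add the Jenkins apt repository and install Jenkins
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y

sudo systemctl status jenkins   # Jenkins listens on port 8080 by default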

image

This link was useful for tomcat: https://www.coachdevops.com/2020/04/install-tomcat9-on-ubuntu-1804-setup.html

image

This is the plan going forwards: image

Roles and responsibilities of Devops Engineer

  • Set up CICD pipeline - automate build and deployment
  • Automate infrastructure setup
  • Automate Test execution process
  • Provide continuous feedback
  • Devops Engineers should also understand how developers work and collaborate

Top 10 Devops Tools

https://www.coachdevops.com/2020/04/top-10-devops-popular-tools-popular.html

  1. Terraform - # 1 Infrastructure automation tool
  2. Git - BitBucket/GitHub/Azure Repos - # 1 - SCM tool
  3. Jenkins, Maven, Master/Slave, Pipelines - scripted, declarative, multi branch - # 1 CI tool
  4. Docker - #1 Container platform
  5. Kubernetes - #1 container orchestration tool
  6. Ansible- #1 Configuration Management tool
  7. Azure DevOps – Microsoft platform for migrating applications to Azure Cloud
  8. SonarQube – #1 Code quality tool
  9. Slack - #1 Collaboration tool
  10. Nexus - #2 Binary repo manager

What is SDLC?

The Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates. These are the phases of the software development life cycle: image

Differences between Agile and Waterfall

Waterfall can be described as a linear process. The process starts with requirements gathering and ends with deployment. Agile can be described as an iterative process. Each iteration leads to gradual improvement of the product. The problem with Waterfall is that the business needs to wait a long time to see value. Waterfall is more rigid and businesses cannot change requirements. Product owners are not involved during the development. In Agile, changes are allowed even after planning is complete. Agile is iterative development where businesses can see value through demos of the work so far. Every story in Agile is tested and delivered to product owners.

Devops is about people, process and technology.

Differences between the two images:

image

image

In the first image there is no automation. There are also silos between development and operations, and no feedback loop. In the second diagram there is collaboration through Slack notifications, and there are checks before pushing the code, including SonarQube, JaCoCo and JUnit. There is security through Artifactory and security scanning. Testing takes place during the deployment stage and there is a distinction between deployment and release. In the second system we are able to fail fast and troubleshoot quickly.

In the first diagram, we see agile development with stories. There are:

  • manual builds
  • manual tests on the machine
  • manual deployments (time consuming)
  • manual code coverage
  • no feedback to production teams
  • no binary repo manager
  • manual infrastructure setup

In this example, developers are siloed, are not collaborating effectively and are unable to find out what they have done wrong. In the first place, it is important to define the deployment plan and take the code from inception to release. The main difference is in the CI/CD part of the pipeline. Each time, Jenkins checks the code for code quality issues. We also integrate notifications such as Slack, and we have a binary repo manager. We have a test environment and a production environment. The model followed is build once, deploy anywhere. There are quality gates at each stage to ensure that quality is upheld, and every build is deployed into the QA and UAT environments. The JAR file is stored in Artifactory and there is no manual build. The infrastructure is provisioned using Infrastructure as Code with Terraform and Ansible. DevOps is a practice that is followed by deployment teams; Agile is a software development methodology. Agile closes the gap between business and developers, and DevOps closes the gap between developers and operations.

  • Continuous Integration - find bugs early in the software development stage
  • Continuous Delivery - deploy code to the production environment (build once and deploy anywhere) - e.g. a fixed release by May 25
  • Continuous Deployment - everything is automated - as soon as the developer makes a change it is deployed to production

What does Faster to market mean?

Agile with DevOps can help to reduce the time to market. It helps move the product to the marketplace quickly. Build and packaging is where DevOps is most useful. How useful is DevOps for a brand new product versus an existing one? If 80% of an application is already in production, it is difficult to automate the process, and there may not even be 50% unit test coverage. It is easier to implement DevOps on a greenfield project.

Why Devops?

  • Business value
    • faster time to market
    • continuous software delivery
    • better quality due to automation
    • improved performance
    • reduced outages - shorter lead times
    • better products
  • Technical value:
    • communication between teams
    • faster resolution of problems
    • increased productivity - more time available to add value (rather than fix / maintain)
    • Infrastructure as code - environment is provisioned through code rather than manually

Devops paradigms

  1. Source code management
  2. Continuous integration
  3. Continuous delivery
  4. Infrastructure As Code
  5. Continuous Deployment
  6. Code Quality tools integration
  7. Monitoring
  8. Microservices and Containerization
  9. Container Orchestration

image

Being good at just the first three can help get a job: git, jenkins, terraform.

These are the tools we will learn:

  • Git, Github, Bitbucket, Azure Repos - Source code management
  • Maven - build tool for Java apps
  • MSBuild - build tool for .NET
  • Jenkins, Azure Devops - Continuous Integration tool
  • SonarQube, JaCoCo, Cobertura - Code quality tools
  • Jira, Azure Board - project management tool
  • Nexus, Artifactory - Binary repo manager
  • Terraform, Ansible, Puppet, Chef - infrastructure automation tools
  • Slack, Microsoft Teams - collaboration tools
  • Docker, Kubernetes - containerization and orchestration tools
  • AWS, Azure, GCP - Cloud platforms
  • Scripting - Groovy (pipelines), YAML playbooks (Ansible), HCL/JSON (Terraform), manifests (Puppet)
  • AWS ECR, Azure Container Registry, DockerHub, Nexus - Container registries
  • Prometheus and Grafana - Monitoring tools (New)

image

The SonarLint plugin runs SonarQube analysis locally on your code. It is a static code analysis tool available as a plugin for Eclipse and IntelliJ. This is quite useful for the rules in SonarQube: https://rules.sonarsource.com/java/RSPEC-3252

image

Fortify is a security tool. Snyk is another vulnerability scanner; it scans the code for vulnerabilities. The CyberSecurity team, rather than the DevOps engineer, is responsible for the security of the product, and Dynatrace is the responsibility of the monitoring team. The DevOps team is responsible for invoking these tools from the pipeline.

Tools integration

How would we move from no automation to full CICD? First we need to set out an overview of the process. We would then set up Jenkins and SonarQube, and then Nexus or Artifactory as the binary repository manager. We would then slowly scale up, starting with CI/CD and container management.

Prospective employers will ask you to show your GitHub repositories, so project work is important. We will take a problem and address it. We then set up Git Bash.

Lab 2 - Create Java Web App using Maven and Setup Java WebApp in Github repo

This link is useful for scm: https://www.cidevops.com/2020/03/what-is-source-code-management-what-is.html

This link is useful for setting up a Java project in Github: https://www.coachdevops.com/2019/05/setup-repo-and-create-java-project-in.html

This is the github account that I have setup: https://github.com/TomSpencerLondon/MyApplicationRepo

image

Lab 3 - CICD - Automate Build and Deployment of Java Web App using Jenkins Free Style job

We can now configure Jenkins to automate build and deployment for the Java WebApp we set up in GitHub in Lab 2.

This link is useful for automating Builds and Deployments using Jenkins: https://www.coachdevops.com/2019/02/create-build-job-in-jenkins-how-to.html

Lab 4 - How to configure Webhooks in GitHub to trigger builds instantly in Jenkins?

This is to configure Webhooks in GitHub which will trigger Jenkins jobs immediately for every code check-in by the developers. This link is useful for configuring Webhooks in GitHub: https://www.cidevops.com/2019/02/how-to-create-webhooks-in-github-and.html
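Webhooks are normally added in the repository's Settings > Webhooks page, but for reference they can also be created through the GitHub REST API. This is a sketch only: the Jenkins hostname is a placeholder and $GITHUB_TOKEN is assumed to be a personal access token with webhook permissions.

curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/TomSpencerLondon/MyApplicationRepo/hooks \
  -d '{"name": "web", "active": true, "events": ["push"], "config": {"url": "http://your-jenkins-host:8080/github-webhook/", "content_type": "json"}}'

The /github-webhook/ endpoint is provided by the Jenkins GitHub plugin.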

Traditional old flow of software development

  1. Manual Builds
  2. Manual unit tests execution
  3. Manual deployments
  4. Manual code quality checks
  5. Manual code coverage
  6. No feedback to prod teams
  7. No binary repo management
  8. Manual Infrastructure setup

Milestones

  1. Automated builds
  2. Automated unit tests execution
  3. Automated code quality checks
  4. Automated security checks
  5. Automated code coverage
  6. Automated deployments
  7. Automated feedback to prod teams
  8. Automated binary repo management
  9. Automated Infrastructure setup

Jenkins is the best tool for CI/CD. It is a Java application that runs as a server in the background. In Lab 0 we created two EC2 instances: one is the Jenkins server and the other is a Tomcat server.

This is a reminder of the CI/CD process image

We will automate build, deployment and code coverage with JaCoCo. We will use a Maven project structure; Maven gives us a standard project directory layout. This link is useful for the Maven project structure: https://www.cidevops.com/2020/03/what-is-maven-why-we-need-maven.html
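To see the layout Maven gives us, we can generate a skeleton web app from the standard webapp archetype (the groupId here is illustrative):

mvn archetype:generate \
  -DgroupId=com.example \
  -DartifactId=MyWebApp \
  -DarchetypeArtifactId=maven-archetype-webapp \
  -DinteractiveMode=false

# Generated layout:
# MyWebApp/
#   pom.xml
#   src/main/webapp/index.jsp
#   src/main/webapp/WEB-INF/web.xml

mvn -f MyWebApp/pom.xml clean package   # produces target/MyWebApp.war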

image

image

There are four ways of triggering build jobs in Jenkins: image

Lab 5 - How to make code changes to trigger Jenkins builds instantly

In this exercise we make a code change to check that Jenkins starts automated builds/deployments instantly. https://www.cidevops.com/2020/02/how-to-push-code-change-into-github.html

We are using this repository for connection with jenkins: https://github.com/TomSpencerLondon/MyApplicationRepo

We can then refresh the browser and click on Source to see the code changes made in the Git Bash window. After making this code change, if the webhooks are configured correctly, a build should have been triggered in Jenkins instantly.
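A typical change from Git Bash might look like this (the JSP path is illustrative; edit whichever page you deployed):

git clone git@github.com:TomSpencerLondon/MyApplicationRepo.git
cd MyApplicationRepo
vi MyWebApp/src/main/webapp/index.jsp   # make a small visible change
git add .
git commit -m "Tweak greeting text"
git push origin main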

We just changed an exclamation mark on "Howdy Folks", but this has been reflected in our application, which is deployed to the Tomcat instance:

image

Lab 6 - Code Quality - How to setup SonarQube and Integrate with Jenkins

SonarQube is a static code quality/analysis tool which will scan application source code and find defects/issues in the code. This link is useful for setting up SonarQube using Docker: https://www.coachdevops.com/2021/12/install-sonarqube-using-docker-install.html All the code is sent to the SonarQube server for analysis, and we can then see the code quality issues. I set the username and password as: admin / password
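The linked article covers the full Docker setup; a minimal single-container sketch looks like this (the image tag is an assumption - pick the version you want):

sudo sysctl -w vm.max_map_count=262144   # required by SonarQube's embedded Elasticsearch
sudo docker run -d --name sonarqube -p 9000:9000 sonarqube:lts-community

# UI on http://<host>:9000; the default login is admin/admin and you are prompted to change it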

The SonarQube UI is up and running: image We now need to integrate SonarQube with Jenkins. This link is useful for SonarQube-Jenkins integration: https://www.coachdevops.com/2020/04/how-to-integrate-sonarqube-with-jenkins.html

image

I also added the Sonarqube credentials: image

When we push to github, we can see the sonarqube analysis: image

Lab 7 - Nexus 3 Setup using Docker Compose and Integrate Nexus with Jenkins

Nexus is a binary repository manager that is used for storing binaries (build output). We will eventually integrate Nexus with Jenkins for uploading binary files. This link is useful to explain why binary repository managers are needed: https://jfrog.com/whitepaper/devops-8-reasons-for-devops-to-use-a-binary-repository-manager/

We can use either Nexus or Artifactory as the binary repo manager. Here we will use Nexus. In order to set up Nexus we need to do the following:

We first add the EC2 instance:

  • Ubuntu EC2 up and running with at least t2.medium (4GB RAM); 2GB will not work
  • Ports 8081 and 8085 opened in the security firewall rule
  • The instance should have Docker and docker-compose installed

These are the commands for setting up docker:

sudo hostnamectl set-hostname Nexus
sudo apt-get update
sudo apt-get install docker-compose -y
sudo usermod -aG docker $USER
sudo vi docker-compose.yml 

This is the docker-compose.yml file:

version: "3"
services:
  nexus:
    image: sonatype/nexus3
    restart: always
    volumes:
      - "nexus-data:/sonatype-work"
    ports:
      - "8081:8081"
      - "8085:8085"

volumes:
  nexus-data: {}

We then run the following commands:

sudo docker-compose up -d 
sudo docker-compose logs --follow

We need to add the nexus password for authentication when we integrate with jenkins:

sudo docker exec -it ubuntu_nexus_1 cat /nexus-data/admin.password

This is the set up with Nexus:

image

This link is useful for integration with Jenkins: https://www.cidevops.com/2018/06/jenkins-nexus-integration-how-to.html

This is the nexus configuration: image

We also have to add a nexus credential in Jenkins: image

Bonus Labs

These bonus labs ensure that we can duplicate the deploy steps we followed above in different environments.

Bonus Lab Exercise 1 - Create Java Web App using Maven & setup in Bitbucket

This link is a useful overview of version control systems: https://www.cidevops.com/2020/03/what-is-source-code-management-what-is.html

We need to set up our bitbucket account at bitbucket.org. This link is useful for setting up ssh keys in bitbucket: https://www.coachdevops.com/2019/09/how-to-setup-ssh-keys-in-bitbucket-and.html

Bonus Lab Exercise 2 - CICD - Automate Build and Deployments of Java WebApp from BitBucket using Jenkins

We have now added our WebApp to Bitbucket. I have also started the Jenkins and Tomcat EC2 instances on AWS.

We then add the configuration to Jenkins for pulling from the Bitbucket repository: image

For the above configuration I used the private key from the Jenkins EC2 instance and then added the public key to Bitbucket.

Bonus Lab Exercise 3 - How to configure webhooks in BitBucket to trigger Jenkins jobs instantly?

This link is useful on webhooks with Bitbucket: https://www.coachdevops.com/2020/06/how-to-configure-webhooks-in-bitbucket.html

Webhooks are used to trigger Jenkins jobs when a change is made to the Bitbucket repository.

image

Lab Exercise 8 - How to send push notifications to Slack from Jenkins (Continuous Feedback)

In this section we will look at integrating Slack with Jenkins to send push notifications after builds. Slack is a collaboration tool which agile teams use to communicate and collaborate. Slack can be integrated with Jenkins.

This link is useful for setting up Slack with Jenkins: https://www.cidevops.com/2018/05/jenkins-slack-integration-jenkins-push.html

image

Lab Exercise 9 - How to Trigger a Jenkins build Job from a Slack channel

We will now look at invoking a Jenkins Job from a Slack Channel. This link is useful for setting up slack jenkins integration with build command: https://www.coachdevops.com/2020/04/trigger-jenkins-job-from-slack-how-to.html

image

Lab Exercise 10 - Configure Jenkins Build Agents for Distributing loads

We will now work on using a build agent to help Jenkins achieve distributed builds. This link is useful for setting up build agents: https://www.coachdevops.com/2021/06/jenkins-build-agent-setup-how-to-setup.html

The advantages of the master build agent model are:

  • distributed builds
  • faster throughput
  • quicker feedback
  • scalable architecture

image

Jenkins Controller

The main Jenkins server is the master. The Master's job is to handle:

  • scheduling build jobs
  • dispatching builds to the slaves for the actual execution
  • Monitoring the slaves - possibly taking them online and offline as required
  • Recording and presenting the build results
  • A master instance of Jenkins can also execute build jobs directly

Jenkins Agent

A slave is a Java executable that runs on a remote machine. These are the characteristics of Jenkins slaves:

  • Slaves hear requests from the master for build executors
  • Slaves can run on a variety of operating systems
  • The slave follows the master commands. For Jenkins this involves executing build jobs dispatched by the Master
  • We can configure a project to always run on a particular Slave machine or a particular type of Slave machine or simply let Jenkins pick the next available Slave.

The Jenkins master uses SSH keys to communicate with the agent. We need to create SSH keys on the Jenkins agent by executing:

ssh-keygen

We then add the public key to the authorized_keys file in .ssh and add the private key to the Jenkins credentials. We then use the credentials to create the agent: image

The agent is then able to authorize the controller connection:

image

When I had an issue with ssh for ec2 I used this command:

sudo chown -R ubuntu:ubuntu .ssh

This may have been related to some issues I was having with ssh keys for controller and agent connection. This link is useful for setting up ssh keys for ec2: https://askubuntu.com/questions/311558/ssh-permission-denied-publickey

Sometimes the issue comes from permissions and ownership. For instance, if you want to log in as root, /root, .ssh and authorized_keys must belong to root. Otherwise, sshd won't be able to read them and therefore won't be able to tell if the user is authorized to log in.

In your home directory:

chown -R your_user:your_user .ssh

As for rights, go with 700 for .ssh and 600 for authorized_keys:

chmod 700 .ssh
chmod 600 .ssh/authorized_keys

Lab Exercise 11 - Create Scripted Pipeline in Jenkins for Automating Builds, Deployments and Code quality checks

We are now going to learn how to implement CI/CD using Jenkins pipelines. Pipelines are better than freestyle jobs because they can express much more complex tasks. We can also see how long each stage takes to execute, giving us more control than freestyle jobs offer.

A Jenkins pipeline is written in a Groovy-based script with a set of integrated plug-ins for automating build, deployment and test execution. The pipeline defines the build process, which typically includes stages for building the application, testing it and then deploying it. We can use the snippet generator to generate pipeline code for stages we don't know how to write in Groovy. Pipelines can be categorised into two groups:

  • scripted pipelines
  • declarative pipelines

Scripted pipelines

Scripted pipelines are defined in node blocks:

node {
  stage('Build') {
    echo 'Building...'
  }
  stage('Test') {
    echo 'Testing...'
  }
  stage('Deploy') {
    echo 'Deploying...'
  }
}

Declarative pipelines

Declarative pipeline can be checked in as part of source control management. The code is defined in a pipeline block and each stage can be executed in parallel in multiple build agents (Slaves):

pipeline {
  agent { label 'slave-node' }
  stages {
    stage('checkout') {
      steps {
        git 'https://bitbucket.org/myrepo'
      }
    }
  
    stage('build') {
      tools {
        maven 'Maven3'
      }
      steps {
        sh 'mvn clean test'
      }
    }
  }
}

Our own scripted pipeline

This link is useful for scripted pipelines: https://www.cidevops.com/2018/12/create-jenkins-pipeline-for-automating.html

We will now create our own scripted pipeline. We already have an EC2 instance on which we have deployed our Jenkins server. A good tip is to start and stop the server as required as this means that we don't lose our configuration. We also need the following plugins which can be installed from the Jenkins plugin page:

  • slack
  • jacoco
  • nexus artifact uploader
  • sonarqube

Next we create a new item as a pipeline job and name it MySecondPipelineJob. We can then use the pipeline syntax generator to create our first pipeline script to download our code from github: image

We can then use this syntax to configure our pipeline: image

We have now successfully checked out the code from github: image

Next we will configure our build and code quality scan:

node {
    def mvnHome = tool 'Maven3'
    stage ("checkout") {
        checkout scmGit(branches: [[name: '*/main']], extensions: [], userRemoteConfigs: [[credentialsId: 'e5ab6423-0076-44e3-aea7-c319e898302a', url: 'git@github.com:TomSpencerLondon/MyApplicationRepo.git']])
    }
    
    stage('build') {
        sh "${mvnHome}/bin/mvn clean install - f MyWebApp/pom.xml"
    }
    
    stage('Code Quality scan') {
        withSonarQubeEnv('SonarQube') {
            sh "${mvnHome}/bin/mvn -f MyWebApp/pom.xml sonar:sonar"
        }
    }
}

Here we have defined mvnHome to refer to the Maven3 tool that we added in Dashboard > Manage Jenkins > Tools:

image

We now add the rest of our configuration:

node {

    def mvnHome = tool 'Maven3'

    stage ('checkout') {
        // copy code here which you generated from step #6
    }

    stage ('build') {
        sh "${mvnHome}/bin/mvn clean install -f MyWebApp/pom.xml"
    }

    stage ('Code Quality scan') {
        withSonarQubeEnv('SonarQube') {
            sh "${mvnHome}/bin/mvn -f MyWebApp/pom.xml sonar:sonar"
        }
    }

    stage ('Code coverage') {
        jacoco()
    }

    stage ('Nexus upload') {
        nexusArtifactUploader(
            nexusVersion: 'nexus3',
            protocol: 'http',
            nexusUrl: 'nexus_url:8081',
            groupId: 'myGroupId',
            version: '1.0-SNAPSHOT',
            repository: 'maven-snapshots',
            credentialsId: '2c293828-9509-49b4-a6e7-77f3ceae7b39',
            artifacts: [
                [artifactId: 'MyWebApp',
                 classifier: '',
                 file: 'MyWebApp/target/MyWebApp.war',
                 type: 'war']
            ]
        )
    }

    stage ('DEV Deploy') {
        echo "deploying to DEV Env"
        deploy adapters: [tomcat9(credentialsId: '4c55fae1-a02d-4b82-ba34-d262176eeb46', path: '', url: 'http://your_tomcat_url:8080')], contextPath: null, war: '**/*.war'
    }

    stage ('Slack notification') {
        slackSend(channel: 'channel-name', message: "Job is successful, here is the info - Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
    }

    stage ('DEV Approve') {
        echo "Taking approval from DEV Manager for QA Deployment"
        timeout(time: 7, unit: 'DAYS') {
            input message: 'Do you approve QA Deployment?', submitter: 'admin'
        }
    }

    stage ('QA Deploy') {
        echo "deploying into QA Env"
        deploy adapters: [tomcat9(credentialsId: '4c55fae1-a02d-4b82-ba34-d262176eeb46', path: '', url: 'http://your_tomcat_url:8080')], contextPath: null, war: '**/*.war'
    }

    stage ('QA notify') {
        slackSend(channel: 'channel-name', message: "QA Deployment was successful, here is the info - Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
    }

    stage ('QA Approve') {
        echo "Taking approval from QA manager"
        timeout(time: 7, unit: 'DAYS') {
            input message: 'Do you want to proceed to PROD Deploy?', submitter: 'admin,manager_userid'
        }
    }

    stage ('PROD Deploy') {
        echo "deploying into PROD Env"
        deploy adapters: [tomcat9(credentialsId: '4c55fae1-a02d-4b82-ba34-d262176eeb46', path: '', url: 'http://your_tomcat_url:8080')], contextPath: null, war: '**/*.war'
    }
}

After adding the correct urls and credentials we have a successful build: image

Lab 12 - Create Declarative Pipeline - Jenkinsfile to Automate builds, Deployments and Code quality checks

This link is useful for the difference between scripted and declarative pipelines: https://www.cidevops.com/2019/05/jenkins-pipelines-cicd-pipelines.html

Scripted pipeline

  • Scripted pipeline is the traditional way of writing pipelines, using Groovy scripting in the Jenkins UI.
  • stricter Groovy syntax
  • each stage cannot easily be executed in parallel across multiple build agents (slaves)
  • code is defined within a node block
// Scripted pipeline
node {
    stage('Build') {
        echo 'Building....'
    }
    stage('Test') {
        echo 'Testing....'
    }
    stage('Deploy') {
        echo 'Deploying....'
    }
}

Declarative Pipeline (Jenkinsfile)

  • A newer Jenkins feature where you create a Jenkinsfile and check it in as part of SCM such as Git.
  • simpler Groovy syntax
  • code is defined within a 'pipeline' block
  • each stage can be executed in parallel across multiple build agents (slaves)
// Declarative pipeline
pipeline {
    agent { label 'slave-node' }
    stages {
        stage('checkout') {
            steps {
                git 'https://bitbucket.org/myrepo'
            }
        }
        stage('build') {
            tools {
                maven 'Maven3'
            }
            steps {
                sh 'mvn clean test'
            }
        }
    }
}

Lab Exercise 13 - Implement CICD - Setup Multi branch Pipeline Job in Jenkins

We will now set up a multibranch pipeline job in Jenkins.

What is a multibranch pipeline?

With multibranch pipelines, Jenkins automatically creates a new pipeline for every Git branch in source control. This enables a different pipeline implementation per branch: for instance, we may want a full CICD pipeline for the master branch and a CI-only pipeline for the develop branch. New branches are discovered automatically and a pipeline is created for each one; a branch-conditional stage is sketched below.
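This is a minimal Jenkinsfile sketch using the declarative when directive, assuming Maven is available on the agent: every branch runs the build, but only master runs the deploy stage.

pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                sh 'mvn -f MyWebApp/pom.xml clean install'
            }
        }
        stage('deploy') {
            when { branch 'master' }   // runs only for the master branch's pipeline
            steps {
                echo 'Deploying from master only...'
            }
        }
    }
}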

How to configure webhooks in Multibranch pipeline

First we add the multibranch scan webhook plugin. Then we tick the following checkbox in the configuration of our multibranch pipeline: image

The branch is now building: image

We have added the token as a webhook on our github repository.

Lab 14 How to setup Quality gates in SonarQube - Add SonarQube quality gates to the Jenkins build pipeline

SonarQube is a popular open-source static code analysis tool. We can set up quality gates in SonarQube and force the build to fail in Jenkins when the quality gate conditions are not met.

To set up the quality gates we have to add a sonarqube webhook. This link is useful for this: https://www.coachdevops.com/2021/01/how-to-setup-quality-gates-in-sonarqube.html

node {

    def mvnHome = tool 'Maven3'

    stage ('checkout') {
        // enter your repo info
    }

    stage ('Build') {
        sh "${mvnHome}/bin/mvn -f MyWebApp/pom.xml clean install"
    }

    stage ('Code Quality scan') {
        withSonarQubeEnv('SonarQube') {
            sh "${mvnHome}/bin/mvn -f MyWebApp/pom.xml sonar:sonar"
        }
    }

    stage ('Quality Gate') {
        timeout(time: 1, unit: 'HOURS') {
            waitForQualityGate abortPipeline: true
        }
    }
}

Bonus Lab Exercise 5 - Install Artifactory using Docker Compose on Ubuntu 22.04

How to set up JFrog Artifactory using Docker-compose

Artifactory is a binary repository manager with an open-source edition. The key features of Artifactory include:

  • support for 27 different package types, including Helm charts and Docker images
  • single source of truth for binaries
  • integration with all CI/CD tools
  • role-based authorization for teams to manage artifacts
  • local, remote and virtual repositories

What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications. With a single command we can create and start all the services defined in our configuration.

Change Host Name to Artifactory

sudo hostnamectl set-hostname Artifactory

Perform System update

sudo apt update

Install Docker-Compose

sudo apt install docker-compose -y

Create docker-compose.yml. This yml has all the configuration for installing Artifactory on Ubuntu EC2.

sudo vi docker-compose.yml

We add our docker-compose.yml:

version: "3.3"
services:
  artifactory-service:
    image: docker.bintray.io/jfrog/artifactory-oss:7.49.6
    container_name: artifactory
    restart: always
    networks:
      - ci_net
    ports:
      - 8081:8081
      - 8082:8082
    volumes:
      - artifactory:/var/opt/jfrog/artifactory

volumes:
  artifactory:
networks:
  ci_net:

Now we execute the compose file using the docker-compose command to start the Artifactory container:

sudo docker-compose up -d

and make sure artifactory is up and running:

sudo docker-compose logs --follow

We can check if Artifactory is running by typing:

curl localhost:8081

This link was useful for adding artifactory to Jenkins: https://www.coachdevops.com/2023/01/how-to-integrate-artifactory-with.html

Lab Exercise 15 - How to install Terraform on Ubuntu 22.04 | Setup Terraform on Ubuntu

This link is useful for installing terraform: https://www.cidevops.com/2020/04/how-to-install-terraform-on-ubuntu-1804.html

To get the latest terraform on Ubuntu:

which terraform
sudo apt-get install unzip -y
sudo mkdir -p /opt/terraform
cd /opt/terraform
sudo wget https://releases.hashicorp.com/terraform/1.4.6/terraform_1.4.6_linux_386.zip
sudo unzip terraform_1.4.6_linux_386.zip
sudo mv terraform /usr/local/bin
terraform --version

Option 1 for Windows laptops - Install Terraform on Windows. If you are using a Windows laptop, you can also install Terraform on your local machine: https://www.coachdevops.com/2019/04/terraform-windows-download.html

Option 2 for Apple laptops - Install Terraform on Apple Mac OS: https://www.cidevops.com/2020/04/how-to-install-terraform-on-mac-os.html

Option 3 for Ubuntu 18.04 EC2 - Install Terraform on Ubuntu Linux OS: https://www.cidevops.com/2020/04/how-to-install-terraform-on-ubuntu-1804.html

Note:

This is an overview of scripted, declarative and multibranch pipelines: image

Lab 16 - Provisioning an EC2 instance using Terraform in AWS Cloud

For this lab we are going to create an EC2 instance using Terraform in AWS with an IAM role.

terraform

We will use terraform from our EC2 instance to create new EC2 instances.

This is the variables.tf:

variable "aws_region" {
       description = "The AWS region to create things in."
       default     = "us-east-2"
}

variable "key_name" {
    description = " SSH keys to connect to ec2 instance"
    default     =  "terraform-us-east-2"
}

variable "instance_type" {
    description = "instance type for ec2"
    default     =  "t2.micro"
}

variable "security_group" {
    description = "Name of security group"
    default     = "my-jenkins-security-group-2023"
}

variable "tag_name" {
    description = "Tag Name of for Ec2 instance"
    default     = "my-ec2-instance"
}
variable "ami_id" {
    description = "AMI for Ubuntu Ec2 instance"
    default     = "ami-0b9064170e32bde34"
}

This is the main.tf:

provider "aws" {
  region = var.aws_region
}

resource "aws_vpc" "main" {
  cidr_block = "172.16.0.0/16"
  instance_tenancy = "default"
  tags = {
    Name = "main"
  }
}

# Create security group with firewall rules
resource "aws_security_group" "jenkins-sg-2023" {
  name        = var.security_group
  description = "security group for jenkins"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound from Jenkins server
  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = var.security_group
  }
}

resource "aws_instance" "myFirstInstance" {
  ami                    = var.ami_id
  key_name               = var.key_name
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.jenkins-sg-2023.id]
  tags = {
    Name = var.tag_name
  }
}

# Create Elastic IP address
resource "aws_eip" "myElasticIP" {
  vpc      = true
  instance = aws_instance.myFirstInstance.id
  tags = {
    Name = "jenkins_elastic_ip"
  }
}

We use:

  • terraform init
  • terraform plan
  • terraform apply
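For reference, a typical run with the plan saved to a file looks like this:

terraform init                 # download providers and initialise state
terraform plan -out=tfplan     # preview the changes and save the plan
terraform apply tfplan         # apply exactly the saved plan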

We also need to create an IAM role to provision EC2 instances in AWS. This article explains it quite well: https://www.coachdevops.com/2021/07/how-to-create-ec2-instances-using.html

I added the terraform files here: https://github.com/TomSpencerLondon/terraform-ec2

Lab Exercise 17 - Provisioning an EC2 instance for SonarQube using Terraform in AWS

We can use the following configuration on our EC2 instance to provision a SonarQube EC2 instance:

resource "aws_vpc" "sonar" {
  cidr_block = "172.16.0.0/16"
  instance_tenancy = "default"
  tags = {
    Name = "sonar_vpc"
  }
}

 resource "aws_security_group" "security_sonar_group_2023" {
      name        = "security_sonar_group_2023"
      description = "security group for Sonar"
      ingress {
        from_port   = 9000
        to_port     = 9000
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

     ingress {
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

     # outbound from Sonar server
      egress {
        from_port   = 0
        to_port     = 65535
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      tags= {
        Name = "security_sonar"
      }
    }
  resource "aws_instance" "mySonarInstance" {
  ami           = "ami-0b9064170e32bde34"
  key_name = "your_aws_ssh_key"
  instance_type = "t2.micro"
  vpc_security_group_ids = [aws_security_group.security_sonar_group_2023.id]

  tags= {
  Name = "sonar_instance"
  }
  }

        # Create Elastic IP address for Sonar instance
  resource "aws_eip" "mySonarInstance" {
  vpc      = true
  instance = aws_instance.mySonarInstance.id
  tags= {
  Name = "sonar_elastic_ip"
  }
}

We then use the following commands:

  • terraform plan
  • terraform apply

Lab 18 - How to destroy all resources or a specific resource using Terraform in AWS

In order to destroy all the resources we have created using Terraform we can run the following command:

  • terraform destroy

In order to destroy specific resources we can run:

ubuntu@ip-172-31-3-136:~/project-terraform$ terraform state list
aws_eip.myElasticIP
aws_eip.mySonarInstance
aws_instance.myFirstInstance
aws_instance.mySonarInstance
aws_security_group.jenkins-sg-2023
aws_security_group.security_sonar_group_2023
aws_vpc.main
aws_vpc.sonar

In order to destroy the EC2 instance we can run:

terraform destroy -target aws_instance.myFirstInstance

If we want to delete the security group we can use the following:

terraform destroy -target aws_security_group.jenkins-sg-2023

Lab Exercise 19 - How to automate infrastructure setup in Terraform using Jenkins pipeline

This link is useful for using Terraform with Jenkins: https://www.coachdevops.com/2021/12/jenkins-pipeline-terraform-integration.html

The repo here contains the scripts we will use: https://github.com/TomSpencerLondon/my-infrastructure-terraform

The backend.tf file manages the state information:

terraform {
  backend "s3" {
    bucket = "my-aws-tf-state-bucket"
    key = "main"
    region = "us-east-1"
    dynamodb_table = "my-dynamo-db-table"
  }
}
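Note that Terraform does not create the backend resources itself: the S3 bucket and the DynamoDB lock table must already exist before terraform init runs. They can be created once with the AWS CLI (names taken from the backend block above; the lock table's partition key must be the string attribute LockID):

aws s3api create-bucket --bucket my-aws-tf-state-bucket --region us-east-1

aws dynamodb create-table \
  --table-name my-dynamo-db-table \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST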

We will store this Terraform state information in an S3 bucket, with the DynamoDB table providing state locking. In our main.tf:

provider "aws" {
  region = var.aws_region
}

resource "aws_vpc" "main" {
  cidr_block = "172.16.0.0/16"
  instance_tenancy = "default"
  tags = {
    Name = "main"
  }
}

#Create security group with firewall rules
resource "aws_security_group" "jenkins-sg-2022" {
  name        = var.security_group
  description = "security group for Ec2 instance"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

 ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

 # outbound from jenkins server
  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags= {
    Name = var.security_group
  }
}

resource "aws_instance" "myFirstInstance" {
  ami           = var.ami_id
  key_name = var.key_name
  instance_type = var.instance_type
  vpc_security_group_ids = [aws_security_group.jenkins-sg-2022.id]
  tags= {
    Name = var.tag_name
  }
}

# Create Elastic IP address
resource "aws_eip" "myFirstInstance" {
  vpc      = true
  instance = aws_instance.myFirstInstance.id
tags= {
    Name = "my_elastic_ip"
  }
}

Here we create an EC2 instance, an Elastic IP address and a security group that opens the ports used to access the instance.

We also create an S3 bucket to store the state:

resource "aws_s3_bucket" "my-s3-bucket" {
  bucket_prefix = var.bucket_prefix
  acl = var.acl
  
   versioning {
    enabled = var.versioning
  }
  
  tags = var.tags
}

We use bucket_prefix to create a random unique name for our bucket. We also use a variables file to avoid hardcoding:

variable "aws_region" {
  description = "The AWS region to create things in."
  default     = "us-east-1"
}

  variable "key_name" {
  description = " SSH keys to connect to ec2 instance"
  default     =  "mySep22Key"
}

  variable "instance_type" {
  description = "instance type for ec2"
  default     =  "t2.micro"
}

  variable "security_group" {
  description = "Name of security group"
  default     = "jenkins-sgroup-dec-2021"
}

  variable "tag_name" {
  description = "Tag Name of for Ec2 instance"
  default     = "my-ec2-instance"
}
  variable "ami_id" {
  description = "AMI for Ubuntu Ec2 instance"
  default     = "ami-05e8e219ac7e82eba"
}
  variable "versioning" {
  type        = bool
  description = "(Optional) A state of versioning."
  default     = true
}
  variable "acl" {
  type        = string
  description = " Defaults to private "
  default     = "private"
}
  variable "bucket_prefix" {
  type        = string
  description = "(required since we are not using 'bucket') Creates a unique bucket name beginning with the specified prefix"
  default     = "my-s3bucket-"
}
  variable "tags" {
  type        = map
  description = "(Optional) A mapping of tags to assign to the bucket."
  default     = {
  environment = "DEV"
  terraform   = "true"
  }
}

We will use a Jenkins pipeline to run our terraform scripts.
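A minimal scripted pipeline for this might look like the sketch below. It assumes Terraform is installed on the Jenkins agent and that AWS credentials are available in the agent's environment; the repository is the one linked above.

node {
    stage('checkout') {
        git 'https://github.com/TomSpencerLondon/my-infrastructure-terraform.git'
    }
    stage('init') {
        sh 'terraform init'
    }
    stage('plan') {
        sh 'terraform plan -out=tfplan'
    }
    stage('approve') {
        input message: 'Apply this plan?', submitter: 'admin'
    }
    stage('apply') {
        sh 'terraform apply tfplan'
    }
}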

Ansible - Configuration Management Tool

Ansible connects to servers over SSH. Hosts are organised into groups in an inventory file, and a playbook then applies the configuration to those groups:

image

What is Ansible?

  • Open source, from Red Hat, based on the Python language
  • Infrastructure as code tool
  • Configuration management tool
  • Flexible and robust
  • Ansible is mostly CLI based
  • Ansible Tower is the commercial version with a user interface

Other solutions include Chef and Puppet.

image

Chef and Puppet use the pull model: image

This repo is useful for ansible infrastructure: https://github.com/akannan1087/myAnsibleInfraRepo

How Ansible manages AWS resources: image

Boto is the AWS SDK for Python; Ansible's AWS modules use it to talk to AWS.

This is useful for ansible: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-and-upgrading-ansible

Ansible is idempotent - running a playbook once or several times produces the same result.

Set up Jenkins using Ansible: image

Lab 20 Installing Ansible

Install ansible on linux: https://www.coachdevops.com/2020/04/install-ansible-on-ubuntu-how-to-setup.html

image

This link is useful for mac installation of Ansible: https://www.coachdevops.com/2020/08/how-to-install-ansible-on-mac-os.html

Lab 21 Ansible Infrastructure Automation - New EC2 Setup for Jenkins

In this lab we will create an EC2 instance setup using Ansible playbook.

This link is useful for EC2 with Ansible on ubuntu: https://www.coachdevops.com/2021/07/ansible-playbook-for-provisioning-new.html

This link is useful for ansible with Mac: https://www.cidevops.com/2018/12/ansible-playbook-for-provisioning-new.html

The steps to create an EC2 instance using Ansible are as follows.

Log in to the EC2 instance where you installed Ansible (using Git Bash, iTerm or PuTTY) and create the inventory directory first:

sudo mkdir /etc/ansible

Edit the Ansible hosts (inventory) file:

sudo vi /etc/ansible/hosts

Add the below two lines at the end of the file:

[localhost]
local

We can then create our playbook:

cd ~
mkdir playbooks  
cd playbooks
sudo vi create_ec2.yml 

This is the create_ec2.yml file:

---
 - name:  provisioning EC2 instances using Ansible
   hosts: localhost
   connection: local
   gather_facts: False
   tags: provisioning

   vars:
     keypair: yourEC2Key
     instance_type: t2.small
     image: ami-007855ac798b5175e
     wait: yes
     group: webserver
     count: 1
     region: us-east-1
     security_group: my-jenkins-security-grp
   
   tasks:

     - name: 'Task 1 - Create my security group'
       local_action: 
         module: ec2_group
         name: "{{ security_group }}"
         description: Security Group for webserver Servers
         region: "{{ region }}"
         rules:
            - proto: tcp
              from_port: 22
              to_port: 22
              cidr_ip: 0.0.0.0/0
            - proto: tcp
              from_port: 8080
              to_port: 8080
              cidr_ip: 0.0.0.0/0
            - proto: tcp
              from_port: 80
              to_port: 80
              cidr_ip: 0.0.0.0/0
         rules_egress:
            - proto: all
              cidr_ip: 0.0.0.0/0
       register: basic_firewall
     - name: 'Task 2 - Launch the new EC2 Instance'
       local_action:  ec2 
                      group={{ security_group }} 
                      instance_type={{ instance_type}} 
                      image={{ image }} 
                      wait=true 
                      region={{ region }} 
                      keypair={{ keypair }}
                      count={{count}}
       register: ec2
     - name: 'Task 3 - Add Tagging to EC2 instance'
       local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
       with_items: "{{ ec2.instances }}"
       args:
         tags:
           Name: MyTargetEc2Instance

We can then run the playbook with:

ubuntu@ip-172-31-37-87:~/playbooks$ ansible-playbook create_ec2.yml

PLAY [provisioning EC2 instances using Ansible] *****************************************************************************************************************************************

TASK [Task] *****************************************************************************************************************************************************************************
changed: [local]

TASK [Task] *****************************************************************************************************************************************************************************
changed: [local]

TASK [Task] *****************************************************************************************************************************************************************************
changed: [local] => (item={'id': 'i-0fe5f9f6567453483', 'ami_launch_index': '0', 'private_ip': '172.31.45.123', 'private_dns_name': 'ip-172-31-45-123.eu-west-2.compute.internal', 'public_ip': '18.168.149.81', 'dns_name': 'ec2-18-168-149-81.eu-west-2.compute.amazonaws.com', 'public_dns_name': 'ec2-18-168-149-81.eu-west-2.compute.amazonaws.com', 'state_code': 16, 'architecture': 'x86_64', 'image_id': 'ami-0eb260c4d5475b901', 'key_name': 'ansible-key', 'placement': 'eu-west-2b', 'region': 'eu-west-2', 'kernel': None, 'ramdisk': None, 'launch_time': '2023-05-24T09:37:20.000Z', 'instance_type': 't2.small', 'root_device_type': 'ebs', 'root_device_name': '/dev/sda1', 'state': 'running', 'hypervisor': 'xen', 'tags': {}, 'groups': {'sg-00d44d51375722a22': 'my-jenkins-security-grp'}, 'virtualization_type': 'hvm', 'ebs_optimized': False, 'block_device_mapping': {'/dev/sda1': {'status': 'attached', 'volume_id': 'vol-0ed945ded76bb092b', 'delete_on_termination': True}}, 'tenancy': 'default'})

PLAY RECAP ******************************************************************************************************************************************************************************
local                      : ok=3    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

We can then see the ec2 instance created: image

Lab Exercise 22 - Config Mgmt Automation - Setup Jenkins on Target EC2 using Ansible Playbook

This link is helpful: https://www.coachdevops.com/2021/08/how-to-setup-jenkins-on-ubuntu-using.html

First we should install java:

sudo apt-get update
sudo apt-get install default-jdk -y
java --version

We already have ansible from lab exercise 21.

Next we create the Java 11 Playbook: https://www.cidevops.com/2020/04/ansible-playbook-for-java-11.html

On our Ansible controller instance we run:

ssh-keygen

Then copy the public key:

sudo cat ~/.ssh/id_rsa.pub

Now we log in to the target node we created in Lab Exercise 21 and open the authorized_keys file:

sudo vi /home/ubuntu/.ssh/authorized_keys

and add our new key.

We then go back to the management node and add the target node address to our hosts:

sudo vi /etc/ansible/hosts

[My_Group]
xx.xx.xx.xx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=~/.ssh/id_rsa ansible_python_interpreter=/usr/bin/python3

We can then test our connection:

ansible -m ping all
ansible all -a "whoami"

We then add our Java11 playbook:

cd ~/playbooks
sudo vi installJava11.yml

This is the yaml we add to the file:

---
- hosts: My_Group
  tasks:
    - name: Task - 1 Update APT package manager repositories cache
      become: true
      apt:
        update_cache: yes
    - name: Task -2 Install Java using Ansible
      become: yes
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
           - openjdk-11-jdk
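We run it like the other playbooks:

ansible-playbook installJava11.yml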

This then installs Java on our target node:

java -version
openjdk version "11.0.7" 2020-04-14
OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-2ubuntu218.04)
OpenJDK 64-Bit Server VM (build 11.0.7+10-post-Ubuntu-2ubuntu218.04, mixed mode, sharing)

Next we can add Jenkins. This link is useful: https://www.cidevops.com/2018/05/install-jenkins-using-ansible-playbook.html

We will add another playbook in our ansible controller node:

sudo vi installJenkins.yml
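The linked article has the full playbook; a minimal sketch using the Jenkins Debian repository looks like this (the key URL was current at the time of writing - check pkg.jenkins.io if it has rotated):

---
- hosts: My_Group
  become: yes
  tasks:
    - name: Add Jenkins apt signing key
      apt_key:
        url: https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
        state: present
    - name: Add Jenkins apt repository
      apt_repository:
        repo: deb https://pkg.jenkins.io/debian-stable binary/
        state: present
    - name: Install Jenkins
      apt:
        name: jenkins
        state: present
        update_cache: yes

We then run:

ansible-playbook installJenkins.yml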

We can now access Jenkins from our target node ip address: image

Next we will install Maven from our Ansible controller node onto the target node. We create the installMaven Ansible playbook. This link is useful: https://www.cidevops.com/2019/01/install-maven-using-ansible-playbook-on.html

sudo vi installMaven.yml

---
- hosts: My_Group
  tasks:
    - name: Install Maven using Ansible
      become: yes
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
           - maven

We then run the playbook:

ansible-playbook installMaven.yml

Lab 23 - Config. Mgmt Automation - Deploy LAMP stack using Ansible Playbook on target EC2 node

We will now install a LAMP stack on our target node. LAMP stands for Linux, Apache, MySQL and PHP. We will use Ansible playbooks to target our nodes. LAMP Stack comprises the following open-source software applications.

  • Linux – This is the operating system hosting the Applications.
  • Apache – Apache HTTP is a free and open-source cross-platform web server.
  • MySQL– Open Source relational database management system.
  • PHP – Programming/Scripting Language used for developing Web applications.

We can create the LAMP stack using our Ansible controller, for which we have already run ssh-keygen:

sudo cat ~/.ssh/id_rsa.pub

and then copy the output. We can then log into the instance where we want to create our LAMP stack and open the following:

sudo vi /home/ubuntu/.ssh/authorized_keys

We can then go back to the management node and add the private or public ip address of the node:

sudo vi /etc/ansible/hosts

[LAMP_Group]  
xx.xx.xx.xx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=~/.ssh/id_rsa  ansible_python_interpreter=/usr/bin/python3

We can then add the LAMP installation playbook:

sudo vi installLAMP.yml

---
- hosts: LAMP_Group
  tasks:
    - name: 'Task 1 - Update APT package manager repositories cache'
      become: true
      apt:
        update_cache: yes
    - name: 'Task 2 - Install LAMP stack using Ansible'
      become: yes
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
           - apache2
           - mysql-server
           - php

and run:

ansible-playbook installLAMP.yml

to execute the playbook.

image

We can now see the apache instance from our instance's IP: image

We can also check the installation of php and mysql on the target node:

php --version
mysql --version

Lab Exercise 24 - Config.Mgmt Automation - Terminate EC2 instances using Ansible Playbook

We can also terminate EC2 instances using an Ansible playbook. We log into our Ansible controller and ensure that we have the following in the inventory:

sudo vi /etc/ansible/hosts 

[localhost]
local

We can then add our terminate.yml file to the playbooks folder on our ansible controller instance:

---
 - name: ec2 termination using Ansible
   hosts: local
   connection: local
   gather_facts: False
   vars:
     - region: 'us-east-2'
     - ec2_id: 'i-05f39cfb80c97df38'
   tasks:
     - name: Terminate instances
       local_action: ec2
         state='absent'
         instance_ids='{{ ec2_id }}'
         region='{{ region }}'
         wait=True

We can either hardcode the EC2 id we wish to terminate, or delete the hardcoded entry and pass the id when running the playbook:

ansible-playbook terminate.yml -e ec2_id=i-xxxx

Lab Exercise 25 - Docker setup on Jenkins Ubuntu EC2 instance

We can now work on installing Docker on our Jenkins instance.

Docker is a platform for developers and sysadmins to develop, deploy and run applications with containers. The use of Linux containers to deploy applications is called containerization. Containers are not new but their use for easily deploying applications is quite new.

First we will install Docker on the Ubuntu instance where we have installed jenkins. This link is useful for installing docker: https://www.coachdevops.com/2019/05/install-docker-ubuntu-how-to-install.html
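The linked article has the detail; the condensed version using Ubuntu's docker.io package is:

sudo apt-get update
sudo apt-get install docker.io -y
sudo usermod -aG docker $USER   # log out and back in for the group change to apply
docker --version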

We can then set up a Docker registry. There are at least four options:

Option 1 - DockerHub as Docker Registry. Create an account (keep your username all lowercase) on the website below for storing Docker images in the public Docker registry: https://cloud.docker.com/

Option 2 - Configure Nexus as Docker Registry. Please click the link below to configure Nexus as a Docker registry: https://www.cidevops.com/2020/02/how-to-configure-nexus-as-docker.html

Option 3 - Configure AWS ECR as Docker Registry. Please click the link below to configure Amazon ECR as a Docker registry: https://www.cidevops.com/2020/05/how-to-setup-elastic-container-registry.html

image

Option 4 - Configure Azure Container Registry. Please click the link below to configure ACR in Azure: https://www.coachdevops.com/2019/12/how-to-upload-docker-images-to-azure.html

image

Step 1 - Create Azure Container Registry (ACR)

Go to https://portal.azure.com/ Create an ACR repository.

Step 2 - Download sample Python App

Go to the machine where your Docker images are stored and run the commands below to download the sample Python app, which is already dockerized.

git clone https://bitbucket.org/ananthkannan/mydockerrepo/src/master/pythonApp/
cd  pythonApp/pythonApp

Step 3 - Create docker image

docker build . -t mypythonapp

Then type the below command:

sudo docker images

Log in to the ACR registry using the username and password.

Step 4 - Tag and push your Docker image to ACR. Now tag and push the Docker image as below:

sudo docker tag mypythonapp mydockerazureregistry.azurecr.io/mypythonapp
sudo docker push mydockerazureregistry.azurecr.io/mypythonapp

Lab 26 - Docker Labs - How to Create Docker image and Upload Docker image into Amazon Elastic Container Registry

This is a recap on the steps for configuring Amazon ECR as Docker Registry and uploading docker images into ECR manually through command line:

Steps for configuring Amazon ECR as Docker Registry and Creating/uploading Docker images(Manual way)

https://www.cidevops.com/2020/05/how-to-setup-elastic-container-registry.html
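In outline, the manual flow looks like this (a sketch: the account ID, region and repo name below are placeholders):

aws ecr get-login-password --region us-east-1 | \
  sudo docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
sudo docker build -t mypythonapp .                 # build the image from the app's Dockerfile
sudo docker tag mypythonapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/mypythonapp:latest
sudo docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/mypythonapp:latest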

Lab 27 - Docker - Automate Upload of Docker images to Amazon ECR using Jenkins Pipeline

This link is useful: https://www.cidevops.com/2020/07/automate-docker-builds-using-jenkins.html

For this lab we will automate the following using the Jenkins pipeline:

  • Creating docker images
  • Uploading docker images into Amazon ECR
  • Deploying docker containers in Jenkins
  • Accessing the pythonApp hosted inside the docker container

Our Jenkins Pipeline will:

  • Automate builds
  • Automate Docker image creation
  • Automate Docker image upload into AWS ECR
  • Automate Docker container provisioning

Some tips for this lab: install the Docker and Docker Pipeline plug-ins in Jenkins, and make sure the Jenkins instance can push to the ECR repo (for example through an IAM instance role). A sketch of the final container-provisioning step is shown below.
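Assuming the image has already been pushed (the container name and account ID below are placeholders), the deploy stage boils down to replacing any previous container and running the new image on the Jenkins host:

sudo docker rm -f mypythonapp-container 2>/dev/null || true   # remove any previous container
sudo docker run -d --name mypythonapp-container -p 5000:5000 \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/mypythonapp:latest
# the sample app listens on port 5000 (see the EXPOSE 5000 line in its Dockerfile)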

Lab 28 - Setup Nexus Docker Registry and Upload Docker images to Nexus Docker Registry

We will now upload docker images to Nexus. Note in the transcript below that the first login attempt fails: with no value after -p, Docker treats the registry URL as the password and falls back to the default registry-1.docker.io.

ubuntu@ip-172-31-37-246:~$ sudo docker login -u admin -p http://18.134.134.72:8085
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get "https://registry-1.docker.io/v2/": unauthorized: incorrect username or password
ubuntu@ip-172-31-37-246:~$ sudo docker login -u admin -p password http://18.134.134.72:8085
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
ubuntu@ip-172-31-37-246:~$ git clone https://bitbucket.org/ananthkannan/mydockerrepo; cd mydockerrepo/pythonApp
Cloning into 'mydockerrepo'...
Receiving objects: 100% (167/167), 16.73 KiB | 951.00 KiB/s, done.
Resolving deltas: 100% (70/70), done.
ubuntu@ip-172-31-37-246:~/mydockerrepo/pythonApp$ ls
Dockerfile  app.py  install_steps.txt  requirements.txt  templates
ubuntu@ip-172-31-37-246:~/mydockerrepo/pythonApp$ sudo docker build . -t 18.134.134.72:8085/mypythonapp123
Sending build context to Docker daemon  10.75kB
Step 1/8 : FROM alpine:3.5
 ---> f80194ae2e0c
Step 2/8 : RUN apk add --update py2-pip
 ---> Using cache
 ---> 56204fb60efa
Step 3/8 : COPY requirements.txt /usr/src/app/
 ---> Using cache
 ---> 7331cd3dc4e1
Step 4/8 : RUN pip install --no-cache-dir -r /usr/src/app/requirements.txt
 ---> Using cache
 ---> 157ea9e24e54
Step 5/8 : COPY app.py /usr/src/app/
 ---> Using cache
 ---> deada879e8e5
Step 6/8 : COPY templates/index.html /usr/src/app/templates/
 ---> Using cache
 ---> 64be068b5512
Step 7/8 : EXPOSE 5000
 ---> Using cache
 ---> b252491210c4
Step 8/8 : CMD ["python", "/usr/src/app/app.py"]
 ---> Using cache
 ---> 90726e9e6329
Successfully built 90726e9e6329
Successfully tagged 18.134.134.72:8085/mypythonapp123:latest
ubuntu@ip-172-31-37-246:~/mydockerrepo/pythonApp$ sudo docker push 18.134.134.72:8085/mypythonapp123
Using default tag: latest
The push refers to repository [18.134.134.72:8085/mypythonapp123]
b02d32ef023b: Pushed 
2fa3d7ef13c2: Pushed 
881df3d74af1: Pushed 
d9618a7a5dd1: Pushed 
cf9645d2305d: Pushed 
f566c57e6f2d: Pushed 
latest: digest: sha256:5099ce8e9955842a6efaf19b1f4e504cfb150b248a8798551283374fa0859e1e size: 1571
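One note on this setup: Docker will only push to a plain-HTTP registry such as this Nexus instance if the daemon lists it as insecure, roughly like this (this overwrites any existing /etc/docker/daemon.json, so merge by hand if one exists):

echo '{ "insecure-registries": ["18.134.134.72:8085"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker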

We can now see our docker image in the Nexus repository UI.

Lab 29 - Containerize PHP App & Automate Docker image creation

This link is useful: https://www.coachdevops.com/2020/05/automate-docker-builds-using-jenkins_3.html

We will now automate building a PHP docker application, covering the following steps:

  1. Automating Docker image creation
  2. Automating Upload of Docker images to Docker registry
  3. Automating running Docker containers in Jenkins

First we create credentials for our docker repository using our docker login and password. We will refer to the id of the credentials in our pipeline.

Next we create a declarative pipeline:

pipeline {
    agent any 
    environment {
        //TODO # 1 --> once you sign up for Docker hub, use that user_id here
        registry = "your_docker_userid/myphp-app-may20"
        //TODO #2 - update your credentials ID after creating credentials for connecting to Docker Hub


        registryCredential = 'your_credentials_id_from_step 1_above'
        dockerImage = ''
    }
    
    stages {
        stage('Cloning Git') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '', url: 'https://bitbucket.org/ananthkannan/phprepo/']]])       
            }
        }
    
    // Building Docker images
    stage('Building image') {
      steps{
        script {
          dockerImage = docker.build registry
        }
      }
    }
    
     // Uploading Docker images into Docker Hub
    stage('Upload Image') {
     steps{    
         script {
            docker.withRegistry( '', registryCredential ) {
            dockerImage.push()
            }
        }
      }
    }
    
     // Stop and remove any previous container for a cleaner Docker run
     stage('docker stop container') {
         steps {
            sh 'docker ps -f name=myPhpContainer -q | xargs --no-run-if-empty docker container stop'
            sh 'docker container ls -a -f name=myPhpContainer -q | xargs -r docker container rm'
         }
       }

    // Run the Docker container; make sure port 8086 is opened in the security group
    stage('Docker Run') {
      steps {
         script {
            dockerImage.run("-p 8086:80 --rm --name myPhpContainer")
         }
      }
    }
  }
}  

We then build the pipeline and can access the app at the configured URL on port 8086.

Lab 30 - Kubernetes Labs - Amazon EKS Cluster setup in AWS using eksctl

We will now set up a Kubernetes Cluster in AWS using the eksctl command. This link is useful: https://www.coachdevops.com/2022/02/create-amazon-eks-cluster-by-eksctl-how.html

Amazon EKS is a fully managed container orchestration service. EKS allows you to quickly deploy a production-ready Kubernetes cluster in AWS and to deploy and manage containerized applications more easily with a fully managed Kubernetes service. EKS takes care of the Master node/Control plane.

EKS clusters can be created in the following ways:

  1. AWS console

  2. AWS CLI

  3. eksctl command

  4. Terraform

For this lab we will create the cluster and its worker nodes in AWS using eksctl.

First we install the AWS cli:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" 
sudo apt install unzip
sudo unzip awscliv2.zip  
sudo ./aws/install
aws --version

We can then run the following commands to install eksctl: https://www.coachdevops.com/2020/10/install-eksctl-on-linux-instance-how-to.html

ubuntu@ip-172-31-37-246:~/mydockerrepo/pythonApp$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
ubuntu@ip-172-31-37-246:~/mydockerrepo/pythonApp$ sudo mv /tmp/eksctl /usr/local/bin
ubuntu@ip-172-31-37-246:~/mydockerrepo/pythonApp$ eksctl version
0.142.0

Next we install kubectl on our ubuntu jenkins instance: https://www.coachdevops.com/2022/05/install-kubectl-on-ubuntu-instance-how.html

ubuntu@ip-172-31-37-246:~/mydockerrepo/pythonApp$ sudo curl --silent --location -o /usr/local/bin/kubectl   https://s3.us-west-2.amazonaws.com/amazon-eks/1.22.6/2022-03-09/bin/linux/amd64/kubectl
sudo chmod +x /usr/local/bin/kubectl 
ubuntu@ip-172-31-37-246:~/mydockerrepo/pythonApp$ kubectl version --short --client
Client Version: v1.22.6-eks-7d68063
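As a quick sanity check once a cluster exists (the cluster name and region below match the eksctl command used in the next lab):

aws eks update-kubeconfig --name demo-eks --region us-east-1   # write the cluster details into ~/.kube/config
kubectl get nodes                                              # the worker nodes should report Ready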

Lab 31 - Kubernetes Labs - Deploy Springboot Microservices App into EKS Cluster using Jenkins Pipeline and Kubectl CLI plug-in

In this lab we will deploy a Springboot microservices containerized app into Amazon EKS cluster by creating a Jenkins Pipeline. We will achieve the following by creating a Jenkins pipeline:

  • Automating builds
  • Automating Docker image creation
  • Automating Docker image upload into ECR
  • Automating Docker containers Deployments into Kubernetes Cluster using Kubectl CLI plug-in

Pre-requisites:

  1. Amazon EKS Cluster is set up and running. Click here to learn how to create an Amazon EKS cluster.
  2. ECR repo created to store docker images.
  3. Jenkins Master is up and running
  4. Docker, Docker pipeline and Kubectl CLI plug-ins are installed in Jenkins

This link is useful for starting an eks cluster: https://www.coachdevops.com/2022/02/create-amazon-eks-cluster-by-eksctl-how.html We will start by recreating our cluster:

eksctl create cluster --name demo-eks --region us-east-1 --nodegroup-name my-nodes --node-type t3.small --managed --nodes 2

To delete the cluster we would run:

eksctl delete cluster --name demo-eks --region us-east-1

Agenda:

  • What is an EKS cluster?
  • What are the different ways to create EKS cluster?
  • How to deploy Microservices into EKS cluster using Jenkins pipeline
  • Set up Jenkins and install Docker and required plugins
  • Create cluster using eksctl
  • Create Jenkins Pipeline to deploy Microservices into EKS cluster
  • Verify deployment using kubectl
  • Access the Microservices app

The code we will use: https://github.com/TomSpencerLondon/springboot-app

EKS is a fully managed control plane from AWS. It allows us to avoid worrying about anything other than the Worker Nodes.

We have several options for creating eks clusters: https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html

  • using eksctl
  • using AWS management console
  • AWS cli command to create a cluster
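In outline, the deploy stage that the Kubectl CLI plug-in runs amounts to the following (the manifest file names are placeholders; the springboot-app repo contains the real ones):

kubectl apply -f deployment.yaml   # create or update the Springboot Deployment
kubectl apply -f service.yaml      # expose it, e.g. through a LoadBalancer Service
kubectl get pods                   # verify the rollout
kubectl get svc                    # find the external endpoint for the app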

Lab 32 - Kubernetes Labs - Deploy Python App into EKS Cluster using Jenkins Pipeline/Kubectl CLI

In this lab we will automate deployment of a Python containerized app into an Amazon EKS cluster using a Jenkins Pipeline. We will achieve the following:

  • Automating builds using Jenkins
  • Automating Docker image creation
  • Automating Docker image upload into AWS ECR
  • Automating Deployments to Kubernetes Cluster

Pre-requisites:

  1. Amazon EKS Cluster is set up and running. Click here to learn how to create an Amazon EKS cluster: https://www.coachdevops.com/2022/02/create-amazon-eks-cluster-by-eksctl-how.html
  2. Jenkins Master is up and running https://www.coachdevops.com/2020/04/install-jenkins-ubuntu-1804-setup.html
  3. Docker, Docker pipeline and Kubectl CLI plug-ins are installed in Jenkins
  4. ECR repo created to store docker images.

This link is useful for deploying our python app to the kubernetes cluster: https://www.cidevops.com/2022/01/deploy-python-app-into-kubernetes.html

Lab 33 - How to setup monitoring on EKS Cluster using Prometheus and Grafana

In this lab we will work on setting up monitoring for EKS Cluster in AWS using Prometheus and Grafana:

What is Prometheus?

  • Prometheus is an open source monitoring tool
  • Provides out-of-the-box monitoring capabilities for the Kubernetes container orchestration platform. It can monitor servers and databases as well.
  • Collects and stores metrics as time-series data, recording information with a timestamp
  • It is pull-based: it collects metrics from targets by scraping their HTTP metrics endpoints.

What is Grafana?

  • Grafana is an open source visualization and analytics software.
  • It allows you to query, visualize, alert on, and explore your metrics no matter where they are stored.

Prometheus and Grafana

Key Components

  1. Prometheus server - Processes and stores metrics data
  2. Alert Manager - Sends alerts to any systems/channels
  3. Grafana - Visualize scraped data in UI

Installation Method:

There are many ways we can set up Prometheus and Grafana. We can install in the following ways:

  1. Create all configuration files of both Prometheus and Grafana and execute them in the right order.
  2. Prometheus operator - to simplify and automate the configuration and management of the Prometheus monitoring stack running on a Kubernetes cluster
  3. Helm chart (recommended). Using helm we can install Prometheus Operator and Grafana.

Why use Helm?

Helm is a package manager for Kubernetes. Helm simplifies the installation of all the components into one command. Installing with Helm is recommended because we will not miss any configuration steps and the process is efficient.

Prerequisites

ubuntu@ip-172-31-37-246:~$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
ubuntu@ip-172-31-37-246:~$ sudo chmod 700 get_helm.sh
ubuntu@ip-172-31-37-246:~$ sudo ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.12.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
ubuntu@ip-172-31-37-246:~$ helm version --client
version.BuildInfo{Version:"v3.12.0", GitCommit:"c9f554d75773799f72ceef38c51210f1842a1dea", GitTreeState:"clean", GoVersion:"go1.20.3"}
ubuntu@ip-172-31-37-246:~$ sudo su jenkins
jenkins@ip-172-31-37-246:/home/ubuntu$ helm version --client
version.BuildInfo{Version:"v3.12.0", GitCommit:"c9f554d75773799f72ceef38c51210f1842a1dea", GitTreeState:"clean", GoVersion:"go1.20.3"} 

After configuration, we can see our Grafana dashboard.

Bonus Lab 06 - Ansible Infrastructure Automation - Provision new EC2 instance creation

This link is useful for setting up ansible automation for provisioning a new EC2 instance: https://www.coachdevops.com/2022/12/automate-ec2-provisioning-in-aws-using.html

Lab 34 - Kubernetes Labs - Deploy Springboot Microservices App into EKS Cluster using Jenkins Pipeline and Helm

In this lab we will work to deploy Springboot Microservices app into EKS cluster using Jenkins pipeline and Helm. This link is useful for this challenge: https://www.coachdevops.com/2023/05/how-to-deploy-springboot-microservices_13.html

What is Helm?

Helm is a package manager for Kubernetes, the K8s equivalent of yum or apt: it manages the installation of applications and their dependencies behind the scenes and hides the complexity from the user. This link is useful for Helm: https://www.coachdevops.com/2021/03/install-helm-3-linux-setup-helm-3-on.html

Sample springboot App Code:

For this lab we will use this Spring application: https://github.com/TomSpencerLondon/docker-spring-boot

Our Jenkins pipeline will:

  • Automate maven build(jar) using Jenkins
  • Automate Docker image creation
  • Automate Docker image upload into Elastic container registry(ECR)
  • Automate Springboot docker container deployments into Elastic Kubernetes Cluster using Helm charts

Pre-requisites:

  1. EKS cluster needs to be up and running. This link is useful for deploying a kubernetes cluster: https://www.coachdevops.com/2022/02/create-amazon-eks-cluster-by-eksctl-how.html

  2. Jenkins instance is up and running

  3. Install AWS CLI on Jenkins instance

  4. Helm installed on Jenkins instance

  5. Install Kubectl on Jenkins instance

  6. Install eksctl on Jenkins instance

  7. Install Docker on the Jenkins instance and make sure Jenkins has the permissions it needs to perform Docker builds

  8. Install the Docker and Docker Pipeline plug-ins in Jenkins

  9. Create ECR repo in AWS

  10. Dockerfile created along with the application source code for springboot App.

  11. Namespace created in EKS cluster
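A hedged sketch of the Helm deploy step the pipeline ends with (the chart path, release name, namespace and ECR repo below are placeholders):

helm upgrade --install myspringbootapp ./charts/myspringbootapp \
  --namespace myapp \
  --set image.repository=123456789012.dkr.ecr.us-east-1.amazonaws.com/myspringbootapp \
  --set image.tag=latest
helm list -n myapp            # confirm the release was created
kubectl get pods -n myapp     # watch the rollout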

Lab 35 - Create Azure DevOps & Azure account for Azure Cloud Labs

For this lab we will start working with Azure Cloud. We will learn how to migrate applications into Azure Cloud using Azure DevOps.

What is Azure DevOps?

This link is useful for more information about Azure DevOps: https://www.coachdevops.com/2019/02/azure-devops-tutorial-learn-azure.html

Azure DevOps (previously known as VSTS) is Microsoft's cloud-based offering for turning an idea into a product on any technology stack and any platform. We can migrate applications into Azure by building pipelines in Azure DevOps. These are the services provided by Azure DevOps:

  1. Azure Boards We can quickly and easily start tracking tasks, features, and bugs associated with our project. We do this by adding one of the three work item types that the Basic process provides: epics, issues, and tasks. As work progresses from not started to completed, we update the State workflow field through To Do, Doing, and Done.

  2. Azure Repos We can create free public and private Git repositories and collaborate through pull requests and code reviews.

  3. Azure Pipelines Azure Pipelines help us implement a build, test, and deployment pipeline for any app. We can either define our pipelines in YAML or use the visual designer.

  4. Azure Test Plans We can test our application code by writing test cases. We can create and run manual test plans, generate automated tests and collect feedback from users.

  5. Azure Artifacts Azure Artifacts provides an easy way to share our artifacts across our entire team or company. By storing our artifacts in a private repository within Azure Artifacts, our team can quickly download or update them.

VSTS feature name → Azure DevOps service name:

  • Build & release → Azure Pipelines: continuous integration and continuous delivery (CI/CD) that works with any language, platform, and cloud.
  • Code → Azure Repos: unlimited cloud-hosted private Git and Team Foundation Version Control (TFVC) repos for projects.
  • Work → Azure Boards: work tracking with Kanban boards, backlogs, team dashboards, and custom reporting.
  • Test → Azure Test Plans: all-in-one planned and exploratory testing solution.
  • Packages (extension) → Azure Artifacts: Maven, npm, and NuGet package feeds from public and private sources.

Bonus Lab 7 - How to setup Dynamic(On demand) Jenkins Slaves (build agents) using Docker Containers

In this lab we will set up dynamic Jenkins slaves (build agents) using Docker. A slave (build agent) helps Jenkins achieve distributed builds, and here we will learn how to create slaves on demand. This link is useful for setting up docker containers as build agents: https://www.coachdevops.com/2022/08/jenkins-build-agent-setup-using-docker.html

Advantages of using Docker Containers as Jenkins Build Agents

  • Ephemeral
  • Better resource utilization
  • Customized agents that can run different builds (e.g. Java 8, Java 11)
  • Scalability

DevOps Interview Preparation - Interview Questions, Useful Tips and Guide

How can you become a successful DevOps engineer?

https://www.coachdevops.com/2021/02/top-devops-skills-for-2021-skills.html

Top DevOps skills for 2023

  1. Any cloud knowledge and experience - AWS, Azure and Google cloud
  2. Linux knowledge and scripting - basic troubleshooting, intermediate scripting, looking at the logs
  3. Experience in Git, GitHub, Bitbucket or any version control systems such as SVN, TFVC
  4. Experience in continuous integration tools such as Jenkins, TeamCity, CircleCI
  5. Experience in Infrastructure automation tools such as Terraform, AWS cloud formation
  6. Experience in Configuration Management tools such as Ansible, Puppet or Chef
  7. Experience in containers such as Docker and Kubernetes
  8. Experience in basic to intermediate scripting; advanced scripting is expected of senior DevOps engineers.
  9. Ability to troubleshoot build and deployment failures.

Soft skills

These days employers are looking not only for strong technical skills but also for the "soft skills" that are essential to success in IT. If you feel you are lacking any of these, no worries: all of them can be developed and improved over time with practice.

  1. Open minded
  2. Willingness to learn new skills
  3. Communication
  4. Approachable
  5. "Get it done" attitude
  6. Being adaptable

Top 10 DevOps tools you should focus on to put your DevOps career on a faster track:

https://www.cidevops.com/2020/04/top-10-devops-popular-tools-popular.html

Here are the top 10 DevOps tools to focus on to fast-track your DevOps learning and kick-start your career as a Cloud or DevOps engineer in about 8 weeks.

  1. Terraform # 1 Infrastructure automation tool
  2. Git – BitBucket/GitHub/Azure Git - # 1 - SCM tool
  3. Jenkins - Create CICD Pipelines - scripted, declarative - # 1 CI tool
  4. Ansible- # 1 Configuration Management tool
  5. Docker- # 1 Container platform
  6. Kubernetes # 1 container orchestration tool
  7. Azure DevOps, Pipelines – provides platform for migrating applications to Azure Cloud
  8. SonarQube – # 1 Code quality tool
  9. Slack – # 1 Collaboration tool
  10. Nexus – # 2 Binary repo manager

DevOps Interview Questions Part - I

http://www.coachdevops.com/2018/03/devops-interview-questions.html

DevOps Interview Questions Part - II

https://www.coachdevops.com/2018/04/devops-interview-questions-part-2.html

DevOps Interview Questions Part - III

https://www.coachdevops.com/2018/07/popular-devops-interview-questions-top.html

Docker Interview Questions

https://www.coachdevops.com/2019/10/docker-interview-questions-popular.html

DevOps Interview Tips

https://www.coachdevops.com/2019/07/tips-for-attending-devops-interviews.html

What questions to ask after the interview is done

https://www.coachdevops.com/2019/01/questions-to-ask-after-you-are-done.html

TODO checklist after the course is done

https://www.coachdevops.com/2018/07/todo-checklist-after-finishing-devops.html

What is Kubernetes?

  • Based on client server model:
    • server - control plane
    • clients - worker nodes

What problems does Kubernetes solve?

  • microservices (no more monoliths)
  • increased use of containers
  • increased demand for managing those containers in a proper way
  • a tool to manage the container life cycle

What features does Kubernetes offer?

  • High availability or no downtime (huge demand for website - highly available - no downtime)
    • Delete worker node - another one is spun up
  • Scalability or high performance
    • set up for increasing pods when more demand required
  • Disaster recovery - Backup and restore
    • one pod goes down -> pod will be deleted and another pod will be run
  • Load balancing
    • traffic and routing traffic - spreading across multiple routes
    • route traffic to available pods

Popular container orchestration platforms

  • EKS - Amazon Elastic Kubernetes Service
  • AKS - Azure Kubernetes Service
  • GKE - Google Kubernetes Engine
  • RedHat - Openshift
  • Nomad by Hashicorp
  • Apache Mesos
  • Docker Swarm - Docker's native orchestrator (runs only Docker containers)

Kubernetes Basic Architecture

  • Kubernetes is formed by a cluster of servers called nodes - master and worker
  • a cluster should have at least one master node and a couple of worker nodes
  • the worker nodes do the actual work: they run the docker containers of the different applications
  • the kubelet is the Kubernetes agent that runs on every node and lets the control plane manage the containers on that node

How to interact with a Kubernetes cluster?

  • Kubectl - command line tool for accessing Kubernetes cluster
    • configuration file ~/.kube/config - information for finding and accessing a cluster
  • Dashboard UI - directly in the browser (not recommended)
  • Direct access to the REST API, e.g. with curl
  • Helm Charts - deploy microservices
  • Rancher - UI tool for accessing a K8s cluster
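A quick sketch of the kubectl route (the context name below is a placeholder):

kubectl config get-contexts        # list the clusters/contexts in ~/.kube/config
kubectl config use-context demo    # switch to the cluster you want to work with
kubectl get pods -A                # simple connectivity check across all namespaces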

Lab 36 - Azure CICD - How to Migrate existing MyWebApp from GitHub to Azure Cloud

We will now look at migrating an existing Java application (MyWebApp) that we had set up in GitHub to Azure Cloud using Azure Pipelines.

Lab 37 - Azure Labs - Pipeline as a Code(YAML) - How to migrate Java App from GitHub to Azure Cloud using Azure YAML Pipelines

This link is useful for the steps to migrate an existing Java WebApp (MyWebApp) that you have set up in GitHub into Azure Cloud using Azure YAML pipelines:

https://www.coachdevops.com/2022/04/create-azure-pipeline-using-yaml-create.html

Lab 38 - Azure CICD - How to migrate Java App from BitBucket to Azure Cloud using Azure Pipelines

This link is useful for migrating existing Java WebApp (MyWebApp) that you have set up in Bitbucket into Azure Cloud: https://www.coachdevops.com/2020/08/how-to-migrate-apps-from-bitbucket-to.html

Lab 39 - Azure Hands on Lab - Slack Integration with Azure DevOps Pipelines

In this lab we integrate Slack with Azure DevOps, sending push notifications to Slack channels after every build. This link is useful: https://www.cidevops.com/2019/01/how-to-integrate-slack-with-vsts-azure.html

Lab 40 - Azure Hands on - Create Ubuntu 20.04 VM in Azure Cloud

In this lab we create an Ubuntu 20.04 Virtual Machine (server) in Azure Cloud.

We then install Java and Maven on this VM to create a Java project to set up in Azure Repos:

https://www.coachdevops.com/2023/04/how-to-create-ubuntu-2004-virtual.html

Lab 41 - Azure Labs - SonarQube Setup on Azure VM and Integration with Azure DevOps

SonarQube is a static code quality/analysis tool which will scan application source code and find defects/issues in the code.

SonarQube Setup on Azure VM: https://www.coachdevops.com/2023/02/how-to-setup-sonarqube-on-vm-in-azure.html

The steps for integrating SonarQube with Azure DevOps and performing code analysis in Azure Pipelines are here: https://www.coachdevops.com/2023/02/how-to-integrate-sonarqube-with-azure.html

Lab 42 - Azure Labs - How to Create Docker images and Upload into Azure Container Registry (ACR)

This link is useful for installing docker: https://www.coachdevops.com/2019/05/install-docker-ubuntu-how-to-install.html

We will now look at the steps for creating docker images and hosting Docker images in Azure Container Registry. https://www.coachdevops.com/2019/12/how-to-upload-docker-images-to-azure.html

Lab 43 - Azure Labs on IAC- Create Azure WebApp (App Service) using Terraform

In this lab we automate Azure WebApp creation using Terraform scripts and the Azure CLI.

Link is below: https://www.cidevops.com/2023/02/how-to-create-azure-resources-using.html
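The Terraform workflow itself follows the usual pattern (a sketch, assuming the lab's .tf files are in the current directory and the Azure CLI is signed in):

az login            # authenticate so the azurerm provider can reach the subscription
terraform init      # download the azurerm provider and initialise the working directory
terraform plan      # preview the App Service resources that will be created
terraform apply     # create the Azure WebApp (App Service)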
