We have an application running in Kubernetes that returns the message "Pls subscribe, like and comment on this video. TY!!!", and the magic starts happening when we change the code and push it to GitHub. As soon as you commit the changes, a Jenkins job is triggered automatically: it builds a new image, pushes the image to Docker Hub, and updates the deployment file with the latest image tag. The new image is then deployed to the Kubernetes cluster using GitOps, and our application starts pointing to the new pods.
- GitOps Workflow
- What is GitOps?
- Dockerfile and Jenkinsfile Walkthrough
- Jenkins Installation
- Jenkins Jobs Setup
- ArgoCD (GitOps) Installation
- ArgoCD (GitOps) Setup
- Automating GitHub to Jenkins using a webhook
- Zero touch end to end (nirvana!)
- AWS account
- GitHub account
- Docker Hub account
- Jenkins installed
- ArgoCD installed
GitOps is an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD tooling, and applies them to infrastructure automation.
GitOps is used to automate the process of provisioning infrastructure. Similar to how teams use application source code, operations teams that adopt GitOps use configuration files stored as code (infrastructure as code). GitOps configuration files generate the same infrastructure environment every time it’s deployed, just as application source code generates the same application binaries every time it’s built.
- Periodically syncs the running cluster with the desired state in the Git repo
- Works with both vanilla manifest files and Helm charts
- Reduced learning curve compared to traditional DevOps tooling
- Increased security
- CI (developer) and CD (ops) permissions are separated
- GitOps doesn't mean getting rid of DevOps
Now let's jump into the GitHub repositories first.
This is the kubernetescode repository, where we have our application file and Dockerfile.
Our application code is app.py. It is a very simple Python program that imports the Flask library and just returns "pls subscribe, like and comment on this video, TY!!!".
The file requirements.txt lists the external library Flask; in this case we are pinning it to version 2.1.0.
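The source isn't reproduced here, but a minimal sketch of such an app.py could look like the following; the route, port and host settings are assumptions based on the description above, so your actual file may differ.

    # app.py - minimal sketch of the Flask app described above
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")  # route name is an assumption
    def index():
        return "Pls subscribe, like and comment on this video. TY!!!"

    if __name__ == "__main__":
        # Listen on all interfaces so the container can accept incoming connections
        app.run(host="0.0.0.0", port=5000)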
The Dockerfile containerizes that Python program and creates a container image. It starts from the python:3.8 base image, copies over the requirements file, runs pip install to pull in Flask, and then runs the Python program so it can accept incoming connections.
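A rough sketch of what such a Dockerfile might look like is shown below; the working directory, file names and exposed port are assumptions, so adjust them to your actual repo.

    # Dockerfile - sketch based on the description above
    FROM python:3.8

    WORKDIR /app

    # Install the pinned Flask dependency first
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    # Copy the application code and run it so it accepts incoming connections
    COPY app.py .
    EXPOSE 5000
    CMD ["python", "app.py"]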
    node {
        def app

        stage('Clone repository') {
            checkout scm
        }

        stage('Build image') {
            app = docker.build("georgenal/test")
        }

        stage('Test image') {
            app.inside {
                sh 'echo "Tests passed"'
            }
        }

        stage('Push image') {
            docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
                app.push("${env.BUILD_NUMBER}")
            }
        }

        stage('Trigger ManifestUpdate') {
            echo "triggering updatemanifestjob"
            build job: 'updatemanifest', parameters: [string(name: 'DOCKERTAG', value: env.BUILD_NUMBER)]
        }
    }
This is the Jenkinsfile for the job that creates the container image.
In the first stage it clones this repository into the Jenkins environment, and then it builds the container image.
The next stage is a dummy placeholder for tests.
The next stage is where I push the image to Docker Hub, tagged with the Jenkins build number.
And in the last stage, we trigger another Jenkins job, named updatemanifest, to update the deployment file, passing the build number along as the DOCKERTAG parameter.
This is the kubernetesmanifest repository, which contains a Jenkinsfile and a deployment file for the Jenkins job that updates the deployment.
If we look at deployment.yaml, the container image initially references the latest tag.
Next, we create a LoadBalancer service so that we can talk to the container.
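The manifests themselves aren't reproduced here, but based on the walkthrough they look roughly like the sketch below. The image reference (georgenal/test), the LoadBalancer service and the three replicas come from the walkthrough; the resource names, labels and ports are assumptions.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: flaskdemo
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: flaskdemo
      template:
        metadata:
          labels:
            app: flaskdemo
        spec:
          containers:
          - name: flaskdemo
            # This is the line the updatemanifest job rewrites with the new tag
            image: georgenal/test:latest
            ports:
            - containerPort: 5000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: flaskdemo-service
    spec:
      type: LoadBalancer
      selector:
        app: flaskdemo
      ports:
      - port: 80
        targetPort: 5000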
    node {
        def app

        stage('Clone repository') {
            checkout scm
        }

        stage('Update GIT') {
            script {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    withCredentials([usernamePassword(credentialsId: 'github', passwordVariable: 'GIT_PASSWORD', usernameVariable: 'GIT_USERNAME')]) {
                        //def encodedPassword = URLEncoder.encode("$GIT_PASSWORD",'UTF-8')
                        sh "git config user.email onalotech7@gmail.com"
                        sh "git config user.name georgeonalo"
                        //sh "git switch main"
                        sh "cat deployment.yaml"
                        sh "sed -i 's+georgenal/test.*+georgenal/test:${DOCKERTAG}+g' deployment.yaml"
                        sh "cat deployment.yaml"
                        sh "git add ."
                        sh "git commit -m 'Done by Jenkins Job changemanifest: ${env.BUILD_NUMBER}'"
                        sh "git push https://${GIT_USERNAME}:${GIT_PASSWORD}@github.com/${GIT_USERNAME}/kubernetesmanifest.git HEAD:main"
                    }
                }
            }
        }
    }
This is the Jenkinsfile for updating the deployment file.
The first stage is similar: it clones this repository into the Jenkins environment. In the second stage it uses sed to swap the image tag in deployment.yaml for the DOCKERTAG parameter, then commits and pushes the change back to the main branch.
For a detailed explanation of how to install Jenkins on EC2 (not on your local machine, because the GitHub webhook will need to reach it), see the official AWS documentation page.
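As a rough sketch, on an Amazon Linux EC2 instance the installation boils down to something like the commands below; package names and repo/key URLs change over time, so treat the AWS and Jenkins docs as the source of truth.

    # Add the Jenkins yum repository and import its signing key
    sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
    sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key

    # Jenkins needs a Java runtime
    sudo yum install -y java-17-amazon-corretto jenkins

    # Start Jenkins and remember to open port 8080 in the instance's security group
    sudo systemctl enable jenkins
    sudo systemctl start jenkins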
Next, Jenkins needs credentials for Docker Hub and GitHub. On the Jenkins home page, click Manage Jenkins, then Manage Credentials, click the Jenkins store under Global, and then Add Credentials; scroll down to the ID field and set it to "dockerhub".
Add another credential the same way, scroll down to the ID field and set it to "github".
Please note that the username here is the username of your GitHub and Docker Hub account respectively, not the email address used for login.
Also, the password is not your login password; instead it should be a personal access token generated from your GitHub and Docker Hub accounts respectively.
To create a Jenkins job, click New Item, enter the name "buildimage", select Pipeline and click OK.
Scroll down to Pipeline, select "Pipeline script from SCM", and then Git under SCM.
Go back to the kubernetescode repo, click Code, copy the HTTPS URL and paste it into the Repository URL field.
Under Branch Specifier, change it to main,
and finally click Save.
Like before, click New Item, enter the name "updatemanifest", select Pipeline and click OK.
Select "This project is parameterized" and then add a string parameter named DOCKERTAG.
Its default value is latest, but it will be overridden by the buildimage job.
Again like before, scroll down to Pipeline, select "Pipeline script from SCM", and then Git under SCM.
Go back to the kubernetesmanifest repo, click Code, copy the HTTPS URL and paste it into the Jenkins Repository URL field.
Under Branch Specifier, change it to main.
Now let's manually run the jobs we have just created: go to the dashboard and select the "buildimage" job.
Click "Build Now".
Our job builds, and this automatically triggers the updatemanifest job.
Going to our Docker Hub account, we see that our very first image is now in the repository.
We are done setting up, building and triggering our jobs; the next step is to install ArgoCD.
For detailed instructions on how to install ArgoCD, see the official ArgoCD documentation page.
Then access the ArgoCD UI and log in.
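For reference, the standard installation and UI access steps look roughly like this; exposing the API server with a LoadBalancer is just one option, port-forwarding works too.

    # Install ArgoCD into its own namespace
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

    # Expose the ArgoCD UI (alternative: kubectl port-forward svc/argocd-server -n argocd 8080:443)
    kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

    # Retrieve the initial password for the "admin" user
    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo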
Next we have to point GitOps (ArgoCD) to our kubernetesmanifest repository and deploy this app.
To do this, go to the ArgoCD console and click New App. Enter the application name flaskdemo, Project: default, and Sync Policy: Automatic.
Keep everything else as it is and scroll down to Repository URL, which has to point to our kubernetesmanifest repo, so go to the kubernetesmanifest repo, copy the HTTPS URL and paste it in.
Under Path enter "./", under Destination select "https://kubernetes.default.svc", under Namespace enter default, keep everything else as it is, and hit the Create button.
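If you prefer doing this declaratively instead of through the UI, the equivalent ArgoCD Application manifest looks roughly like the sketch below; substitute your own GitHub username in the repository URL.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: flaskdemo
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/<your-github-username>/kubernetesmanifest.git
        targetRevision: HEAD
        path: .
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        automated: {}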
And our application is deployed. Click on it, and ArgoCD will show you the resource flow: it created the load balancer, and behind the load balancer there are three pods.
If we go back to our terminal and run kubectl get pods, we will see our three pods.
To get the load balancer URL, run kubectl get svc.
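Concretely, the checks are just a couple of kubectl commands; the exact names in the output will match whatever your deployment.yaml defines.

    kubectl get pods    # the three application pods should be Running
    kubectl get svc     # note the EXTERNAL-IP / hostname of the LoadBalancer service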
Copy the load balancer URL, paste it into your browser and hit enter.
Yes, it worked! It is serving the Flask application.
Now the only thing left is to automatically trigger the Jenkins job whenever I push my Python code to the repository.
To set up the webhook, go to the Jenkins dashboard and copy its URL, then go to your kubernetescode repository, click Settings, select Webhooks, and click Add webhook. For the payload URL, enter the Jenkins URL you just copied and append github-webhook/ to it; this must be done for the trigger to work.
Under Content type select application/json, select just the push event, then hit the green Add webhook button.
Now go back to the Jenkins job and tell it to trigger the job any time it receives a webhook.
Go to the buildimage Jenkins job, click Configure, select "GitHub hook trigger for GITScm polling", and click Save.
Now everything is set up; it should be zero touch.
Go to the application code and change something.
I changed the message from "pls subscribe, comment and like this video" to "Hello, Docker project".
Scroll down and commit the changes.
This should automatically trigger the buildimage job via the webhook.
Sure enough, it did!
Check the updatemanifest job; this should also have been triggered automatically.
Perfect!
Now go to Docker Hub and refresh;
here you can see my newest image tag,
and if you go to the deployment.yaml file, it should have the new tag as well.
Yes, it does.
And if we go back to our ArgoCD flow, you will notice it has detected the change: it is terminating the old pods and creating new ones.
Go to the terminal and run kubectl get pods to see the new pods.
Yes, we are on track. Finally, go to the application page and refresh your browser.
Hurray!!!
As expected, the Kubernetes LoadBalancer service is now pointing to the updated Docker image with the updated code.
Alright folks, we did it: end-to-end deployment into a Kubernetes cluster using Jenkins, DevOps and GitOps.