hatmarch / argocd-demo

Repo demonstrating ArgoCD on OpenShift (with Tekton)


ArgoCD Demo

Introduction

This repo contains all the scripts and code (as submodules) needed to create a demo that supports this walkthrough, which demonstrates the basic functionality of ArgoCD:

  • Deployment on merge of a pull request

  • Differential deployment using Kustomize

Tip
There is a .devcontainer in this repo, so if you have Visual Studio Code with remote extensions enabled, you can open the repo folder in a container that already has all the tools necessary to run this demo installed.

This demo is still a work in progress. See here for a list of items that are as yet unfinished.

Warning

Make sure you run the following command before executing any of the commands listed in this demo.

source scripts/shell-setup.sh
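
Among other things, this script presumably exports the $DEMO_HOME variable that the commands below rely on. A minimal sketch of that assumption (not the actual script contents):

# ASSUMPTION: shell-setup.sh exports DEMO_HOME pointing at the repo root
export DEMO_HOME="$(git rev-parse --show-toplevel)"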

Demo Setup

Run the create-demo.sh script to set up all the necessary aspects of the demo.

Here is an example command with the -i option, which will install all prerequisites (e.g. operators):

$DEMO_HOME/scripts/create-demo.sh -i
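
If the prerequisite operators are already installed, presumably the -i flag can be omitted:

$DEMO_HOME/scripts/create-demo.sh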

Demo Cleanup

To remove the demo, you can run the following command:

$DEMO_HOME/scripts/cleanup.sh

If you want to uninstall all the supporting operators and CRDs, you can run the cleanup command with the -f option:
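
$DEMO_HOME/scripts/cleanup.sh -f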

Troubleshooting

ArgoCD

Command Line

To trigger a sync and get verbose output:

argocd app sync coolstore-argo --loglevel debug

To change details about an existing deployment, use the argocd app set command:

argocd app set coolstore-argo --directory-recurse=false
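
When troubleshooting, it can also help to inspect an app's current sync and health status with the standard argocd app get command:

argocd app get coolstore-argo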

MySQL

You can test access to a MySQL database in an OpenShift cluster using the Adminer image.

  1. First, set up port forwarding to the service in question (assuming a petclinic-based service as shown in the walkthrough)

    oc port-forward svc/petclinic-mysql 3306:3306
  2. Then, in another shell, run the Adminer image with its port forwarded to 8080. NOTE: This assumes you are running on a Mac using Docker for Mac, which is where the docker.for.mac.localhost hostname comes from

    docker run -p 8080:8080 -e ADMINER_DEFAULT_SERVER=docker.for.mac.localhost adminer:latest
  3. From the Adminer web page, log in as root (using whatever secret was used in the setup of the cluster). You can then run arbitrary commands. Here are the commands to run to grant the user pc access to a newly created petclinic database (from here)

    CREATE USER 'pc'@'%' IDENTIFIED BY 'petclinic';
    CREATE DATABASE petclinic;
    GRANT ALL PRIVILEGES ON petclinic.* TO 'pc'@'%';
    1. Or instead, you can run the SQL commands from the local command line:

      oc run mysql-client --image=mysql:5.7 --restart=Never --rm=true --attach=true --wait=true \
          -- mysql -h petclinic-mysql -uroot -ppetclinic -e "CREATE USER 'pc'@'%' IDENTIFIED BY 'petclinic'; \
            CREATE DATABASE petclinic; \
            GRANT ALL PRIVILEGES ON petclinic.* TO 'pc'@'%';"
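
To verify the grants took effect, a similar one-off client pod can connect as the newly created pc user (a sketch based on the command above):

oc run mysql-client --image=mysql:5.7 --restart=Never --rm=true --attach=true --wait=true \
    -- mysql -h petclinic-mysql -upc -ppetclinic -e "SHOW DATABASES;"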

Troubleshooting Pipeline Tasks

General

If a pipeline fails and the logs are not enough to determine the problem, you can use the fact that every task maps to a pod to your advantage.

Let’s say that the task "unit-test" failed in a recent run.

  1. First look for the pod that represents that run

    $ oc get pods
    NAME                                                              READY   STATUS      RESTARTS   AGE
    petclinic-dev-pipeline-tomcat-dwjk4-checkout-vnp7v-pod-f8b5j      0/1     Completed   0          3m18s
    petclinic-dev-pipeline-tomcat-dwjk4-unit-tests-5pct2-pod-4gk46    0/1     Error       0          3m
    petclinic-dev-pipeline-tomcat-kpbx9-checkout-t78sr-pod-qnfrh      0/1     Error       0
  2. Then use the oc debug command to restart that pod to look around:

    $ oc debug po/petclinic-dev-pipeline-tomcat-dwjk4-unit-tests-5pct2-pod-4gk46
    Starting pod/petclinic-dev-pipeline-tomcat-dwjk4-unit-tests-5pct2-pod-4gk46-debug, command was: /tekton/tools/entrypoint -wait_file /tekton/downward/ready -wait_file_content -post_file /tekton/tools/0 -termination_path /tekton/termination -entrypoint ./mvnw -- -Dmaven.repo.local=/workspace/source/artefacts -s /var/config/settings.xml package
    If you don't see a command prompt, try pressing enter.
    sh-4.2$
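
From that debug shell, you can re-run the task's original command (shown in the command was: output above) to reproduce the failure interactively:

./mvnw -Dmaven.repo.local=/workspace/source/artefacts -s /var/config/settings.xml package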

Volume Issues

Sometimes pipelines fail to run because the workspace volume cannot be mounted. The root cause appears to be the underlying infrastructure volume being deleted out from under a PersistentVolume. If you have pipelines that are timing out due to pods failing to run (usually you won't get any log stream), take a look at the events on the pod and see if you notice this kind of mounting error:

[image: missing volume error event]
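
One way to look for these events (the pod name is illustrative):

# the Events section appears at the bottom of the describe output
oc describe pod petclinic-dev-pipeline-tomcat-dwjk4-unit-tests-5pct2-pod-4gk46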

This can usually be remedied by deleting the PVC, but finalizers keep PVCs from being deleted if a pod has a claim.

If you run into this issue, cancel the affected pipeline (otherwise the PVC won't be able to be deleted), then either run the following command or see the additional details that follow:

scripts/util-recreate-pvc.sh pipeline-source-pvc.yaml

To see all the claims on a PVC, look for the Mounted By section of the output of the following describe command (for pvc/maven-source-pvc):

oc describe pvc/maven-source-pvc

To delete all pods that have a claim on the pvc pvc/maven-source-pvc:

oc delete pods $(oc describe pvc/maven-source-pvc | grep "Mounted By" -A40 | sed "s/ //ig" | sed "s/MountedBy://ig")

Troubleshooting OpenShift Permissions

You can use the oc run command to run certain containers in a given project as a service account.

For instance, this command can be used to see what kind of permissions the builder service account has to view other projects (e.g. access to remote imagestreams):

oc run test3 --image=quay.io/openshift/origin-cli:latest --serviceaccount=builder -it --rm=true
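
Once inside the resulting shell, you can probe access directly, for instance (the target project is illustrative):

# run from within the origin-cli container started above
oc get imagestreams -n petclinic-cicd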

Troubleshooting (Local) Tomcat Server

If the Tomcat extension fails to run, you can attempt the following:

  1. Remove any old Tomcat files

    rm -f /opt/webserver/webse*
  2. Attempt to re-add Tomcat to /opt/webserver per the instructions above

  3. If that still doesn't work, rebuild the container.

  4. If all else fails, you can run the Tomcat server locally.

OpenShift Nexus Installation

The $DEMO_HOME/scripts/create-cicd.sh script will create a Nexus instance within the petclinic-cicd project and configure the repo accordingly so that the application can be built appropriately. Should something go wrong, this section outlines the steps the script should have taken so that you can troubleshoot.

[image: nexus maven public]

The original petclinic app uses some repos outside of Maven Central, namely a Spring repository and a Red Hat repository (example configurations are shown below).

Here’s how you would manually configure these in Nexus:

  1. Connect to the nexus instance (see route)

    echo "http://$(oc get route nexus -n petclinic-cicd -o jsonpath='{.spec.host}')/"
  2. Log into the Nexus instance (the standard Nexus setup has credentials admin / admin123)

  3. Go to Repositories and Create Repository for each of the repos needed

    [image: nexus repositories]

    1. Here’s an example configuration for each of the above

      [images: Spring and Red Hat repository configurations]

  4. Add the two repositories to the maven-public group as per the screenshot

    FIXME: This is necessary until every build gets a semantic version number update

  5. Update the maven-releases repo to allow updates like below:

    [image: nexus repo allow redeploy]

OpenShift Pipeline (Git) Triggers

Tekton allows for EventListeners, TriggerTemplates, and TriggerBindings so that a git repo can hit a webhook and trigger a build. See also here. To get basic triggers going for both Gogs and GitHub, see the trigger resources listed below.

Note
For an example of triggers working with Tekton, see files in the template directory of this repo
Note
You may also want to consider this tekton dashboard functionality

YAML resources for the pipeline created for this demo can be found in these locations:

  1. Resources: $DEMO_HOME/kube/tekton/resources

  2. Triggers: $DEMO_HOME/kube/tekton/triggers
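
If the triggers ever need to be (re)created manually, those YAML files can presumably be applied directly:

oc apply -f $DEMO_HOME/kube/tekton/resources/
oc apply -f $DEMO_HOME/kube/tekton/triggers/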

Triggered Pipeline Fails to Run

If the trigger doesn’t appear to fire, check the logs of the running pod that represents the webhook (the EventListener’s pod). The problem is likely in the PipelineRun template.
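
A sketch for locating that pod, assuming the eventlistener label that Tekton Triggers applies to EventListener pods (the listener name here is hypothetical):

# find the EventListener pod and tail its logs
oc get pods -l eventlistener=petclinic-event-listener
oc logs -f <pod-name-from-above>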

Viewing (Extended) OpenShift Pipeline (Tekton) Logs

You can see limited logs in the Tekton UI, but if you want the full logs, you can access them from the command line using the tkn command:

# Get the list of pipelineruns in the current project
tkn pipelinerun list

# Output the full logs of the named pipeline run (where petclinic-deploy-dev-run-j7ktj is a pipeline run name)
tkn pipelinerun logs petclinic-deploy-dev-run-j7ktj

To output the logs of a currently running pipelinerun (pr) and follow them, use:

tkn pr logs -L -f

Appendix

Still To Come
