flerro / knative-minishift

Temp repo for testing knative with minishift

Knative on Minishift

This is a tutorial to learn Knative on Minishift.

Minishift setup

  • Download the archive for your operating system from the Minishift v1.25.0 Release page and extract its contents.

  • Copy the contents of the extracted directory to your preferred location.

  • Add the minishift binary to your PATH environment variable (see the sketch after this list).
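
For example, on Linux the three steps above might look like this (the download URL, archive name, and install path are assumptions based on the v1.25.0 release naming, so adjust them for your platform):

# Hypothetical example: download, extract and install Minishift v1.25.0 on Linux
curl -LO https://github.com/minishift/minishift/releases/download/v1.25.0/minishift-1.25.0-linux-amd64.tgz
tar -xzf minishift-1.25.0-linux-amd64.tgz
sudo cp minishift-1.25.0-linux-amd64/minishift /usr/local/bin/
# /usr/local/bin is normally already on PATH; otherwise add it explicitly
export PATH=$PATH:/usr/local/bin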

Run the following command to verify that Minishift is installed and configured correctly:

# returns minishift v1.25.0+90fb23e
minishift version
Then configure and start a Minishift profile for Knative:

#!/bin/bash

# make sure the profile is set correctly
minishift profile set knative

# Pin to the required OpenShift version, in this case v3.11.0
minishift config set openshift-version v3.11.0

# memory for the vm
minishift config set memory 8GB

# the vCpus for the vm
minishift config set cpus 4

# extra disk size for the vm
minishift config set disk-size 50g

# caching the images that will be downloaded during app deployments
minishift config set image-caching true

# Add a new user called admin (password: admin) with the cluster-admin role
minishift addons enable admin-user

# Allow the containers to be run with uid 0
minishift addons enable anyuid

# Start minishift
minishift start

eval $(minishift docker-env) && eval $(minishift oc-env)

Enable Admission Controller Hooks

#!/bin/bash

# Enable admission controller webhooks
# The configuration stanzas below look weird and are just to workaround
# https://bugzilla.redhat.com/show_bug.cgi?id=1635918
minishift openshift config set --target=kube --patch '{
    "admissionConfig": {
        "pluginConfig": {
            "ValidatingAdmissionWebhook": {
                "configuration": {
                    "apiVersion": "apiserver.config.k8s.io/v1alpha1",
                    "kind": "WebhookAdmission",
                    "kubeConfigFile": "/dev/null"
                }
            },
            "MutatingAdmissionWebhook": {
                "configuration": {
                    "apiVersion": "apiserver.config.k8s.io/v1alpha1",
                    "kind": "WebhookAdmission",
                    "kubeConfigFile": "/dev/null"
                }
            }
        }
    }
}'
  1. Wait for some time after this step to allow OpenShift to restart automatically. For example, you can keep retrying oc login -u admin -p admin until you are able to log in again.
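
One way to do that (this retry loop is a convenience added here, not part of the original instructions):

#!/bin/bash
# Keep retrying the login until the OpenShift API server is back after the restart
until oc login -u admin -p admin >/dev/null 2>&1; do
  echo "OpenShift is still restarting, retrying in 10s..."
  sleep 10
done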

Prerequisites

SCCs (Security Context Constraints) are the precursor to the PSP (Pod Security Policy) mechanism in Kubernetes. Grant the privileged and anyuid SCCs to the default service account in the myproject namespace so that the workloads deployed later in this tutorial are allowed to run:

oc project myproject
# Set privileged and anyuid scc to default SA in myproject
oc adm policy add-scc-to-user privileged -z default -n myproject
oc adm policy add-scc-to-user anyuid -z default -n myproject

Knative Deployment

Istio

This tutorial uses the Red Hat OpenShift Service Mesh to get Istio up and running, which includes Jaeger and Kiali.

You need to raise the operating system limit on mmap counts, otherwise Elasticsearch will fail to start. Jaeger, which is part of the Red Hat OpenShift Service Mesh, uses Elasticsearch, and Elasticsearch by default uses an mmapfs directory to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions. To fix this, log into the Minishift VM and run the following:

$ minishift ssh
[docker@istio-tutorial ~]$ sudo -i
[root@istio-tutorial ~]$ echo "vm.max_map_count = 262144" > /etc/sysctl.d/99-elasticsearch.conf
[root@istio-tutorial ~]$ sysctl vm.max_map_count=262144
[root@istio-tutorial ~]$ exit
logout
[docker@istio-tutorial ~]$ exit
logout

Now that your system is prepared, you can install the Istio operator:

#!/bin/bash

oc new-project istio-operator
oc new-app -f https://raw.githubusercontent.com/Maistra/openshift-ansible/maistra-0.3/istio/istio_community_operator_template.yaml --param=OPENSHIFT_ISTIO_MASTER_PUBLIC_URL="https://$(minishift ip):8443"

Wait for the operator to be ready

$ oc get pods -w -n istio-operator

NAME                              READY     STATUS    RESTARTS   AGE
istio-operator-58c45d8f64-jdsrz   1/1       Running   0          10s
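
If you prefer to block until the operator is available instead of watching the pod list, something like the following should also work (it assumes the template creates a deployment named istio-operator):

oc rollout status deployment/istio-operator -n istio-operator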

Once the operator is ready, create an Istio installation:

#!/bin/bash
oc project istio-operator
cat <<EOF >/tmp/istio-cr.yaml
apiVersion: "istio.openshift.com/v1alpha1"
kind: "Installation"
metadata:
  name: "istio-installation"
spec:
  deployment_type: origin
  istio:
    authentication: false
    community: true
    prefix: maistra/
    version: 0.3.0
  jaeger:
    prefix: jaegertracing/
    version: 1.7.0
    elasticsearch_memory: 2Gi
  kiali:
    username: kialiadmin
    password: kialiadmin
    prefix: kiali/
    version: v0.8.1
EOF
oc create -f /tmp/istio-cr.yaml
Note: if you want mutual TLS enabled by default, set the authentication parameter to true.

Wait for Istio’s components to be ready

$ oc get pods -n istio-system

NAME                                          READY     STATUS      RESTARTS   AGE
elasticsearch-0                               1/1       Running     0          3m
grafana-648c7d5cc6-d4cgr                      1/1       Running     0          3m
istio-citadel-64f86c964-vjd6f                 1/1       Running     0          5m
istio-egressgateway-8579f6f649-zwmqh          1/1       Running     0          5m
istio-galley-569c79fcbf-rc24l                 1/1       Running     0          5m
istio-ingressgateway-5457546784-gct2p         1/1       Running     0          5m
istio-pilot-78d8f7465f-kmclw                  2/2       Running     0          5m
istio-policy-b57648f9f-cvj82                  2/2       Running     0          5m
istio-sidecar-injector-5876f696f-s2pdt        1/1       Running     0          5m
istio-statsd-prom-bridge-549d687fd9-g2gmc     1/1       Running     0          5m
istio-telemetry-56587b8fb6-wpg9k              2/2       Running     0          5m
jaeger-agent-6qgrl                            1/1       Running     0          3m
jaeger-collector-9cbd5f46c-kt9w9              1/1       Running     0          3m
jaeger-query-6678967b5-8sgdp                  1/1       Running     0          3m
kiali-6b8c686d9b-mkxdv                        1/1       Running     0          2m
openshift-ansible-istio-installer-job-rj7pj   0/1       Completed   0          7m
prometheus-6ffc56584f-zgbv9                   1/1       Running     0          5m
Important: The Istio sidecar injection mechanism that comes with Red Hat OpenShift Service Mesh allows you to automatically inject the sidecar into your services by adding the sidecar.istio.io/inject: "true" annotation to them. See Bookinfo as an example of how to achieve this.
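
As a minimal sketch (the Deployment name and image below are placeholders, not part of this tutorial), opting a plain workload into injection means putting the annotation on its pod template:

cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: my-app
        image: quay.io/example/my-app:latest
EOF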

Knative Serving

Knative Serving supports deploying serverless functions and applications on Kubernetes.

#!/bin/bash

# Grant the necessary privileges to the service accounts Knative will use:
oc adm policy add-scc-to-user anyuid -z build-controller -n knative-build
oc adm policy add-scc-to-user anyuid -z controller -n knative-serving
oc adm policy add-scc-to-user anyuid -z autoscaler -n knative-serving
oc adm policy add-cluster-role-to-user cluster-admin -z build-controller -n knative-build
oc adm policy add-cluster-role-to-user cluster-admin -z controller -n knative-serving

# Deploy Knative serving
curl -L https://storage.googleapis.com/knative-releases/serving/latest/release-no-mon.yaml \
  | sed 's/docker.io\/istio\/.*$/maistra\/proxyv2-centos7:0.3.0/' \
  | sed 's/istio-pilot:8080/istio-pilot.istio-system:15005/' \
  | oc apply -f -
  1. The oc adm policy commands above set up the OpenShift security policies required to deploy Knative and make it functional.

Wait until all the pods in the knative-serving and knative-build namespaces are up and running; you can verify this with oc get pods -n knative-serving -w and oc get pods -n knative-build -w.
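
If you want that wait scripted rather than interactive, a rough sketch is to poll until nothing in either namespace is still starting (this loop is an assumption added here, and it presumes the pods have already been created):

#!/bin/bash
# Poll until every pod in knative-serving and knative-build is Running or Completed
for ns in knative-serving knative-build; do
  until [ -z "$(oc get pods -n $ns --no-headers | grep -vE 'Running|Completed')" ]; do
    echo "Waiting for pods in $ns..."
    sleep 10
  done
done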

Tip

Add the minishift ingress CIDR to the OS routing table to allow calling Knative services using LoadBalancer IP:

# Only for macOS
sudo route -n add -net $(minishift openshift config view | grep ingressIPNetworkCIDR | awk '{print $NF}') $(minishift ip)

# Only for Linux
sudo ip route add $(minishift openshift config view | grep ingressIPNetworkCIDR | sed 's/\r$//' | awk '{print $NF}') via $(minishift ip)

App Deployment

oc project myproject

export IP_ADDRESS=$(oc get svc knative-ingressgateway -n istio-system -o 'jsonpath={.status.loadBalancer.ingress[0].ip}')

echo '
apiVersion: serving.knative.dev/v1alpha1 # Current version of Knative
kind: Service
metadata:
  name: helloworld-go # The name of the app
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
            env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"
' | oc create -f -

# Wait for the hello pod to enter its `Running` state
oc get pod --watch

# This should output 'Hello World: Go Sample v1!'
curl -H "Host: helloworld-go.myproject.example.com" http://$IP_ADDRESS

The curl above should return "Hello World: Go Sample v1!".

If you’d like to view the available sample apps and deploy one of your choosing, head to the sample apps repo.

Clean up

kubectl delete configurations.serving.knative.dev --all
kubectl delete revisions.serving.knative.dev --all
kubectl delete routes.serving.knative.dev --all
kubectl delete services.serving.knative.dev --all

(or)

kubectl delete all --all -n myproject
