Hybrid Cloud

Hybrid Cloud Demo using OpenShift on Multiple Clouds (Public/Private)

Tested with skupper 0.2.0

TL;DR

Install Skupper

# Linux
curl -fL https://github.com/skupperproject/skupper-cli/releases/download/0.2.0/skupper-cli-0.2.0-linux-amd64.tgz | tar -xzf -

# macOS
curl -fL https://github.com/skupperproject/skupper-cli/releases/download/0.2.0/skupper-cli-0.2.0-mac-amd64.tgz | tar -xzf -

Deploy Services

Important
If you are using minikube as one of your clusters, run minikube tunnel in a new terminal and keep it running:
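minikube tunnel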

Deploy the backend to all your clusters:

kubectl apply -f backend-{azr|aws|gcp|minikube}.yml

Then annotate the backend service so that it is Skupper-aware:

kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
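A quick way to check that the annotation is in place (this just prints the service annotations):

kubectl get service hybrid-cloud-backend -o jsonpath='{.metadata.annotations}'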

Deploy the frontend to your main cluster:

The Kubernetes service is of type LoadBalancer. If you are deploying the frontend in your public cluster, open the frontend.yml file and set the Ingress host to your own host:

spec:
  rules:
  - host: ""
Tip
On OpenShift you can run oc expose service hybrid-cloud-frontend after deploying the frontend resources, so modifying the Ingress configuration is not required. Of course, the first approach works on OpenShift as well.
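That is, on OpenShift something like this creates the route and shows the generated host:

oc expose service hybrid-cloud-frontend
oc get route hybrid-cloud-frontend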

Deploy the frontend by calling:

kubectl apply -f frontend.yml

In your main cluster, init skupper and create the connection-token:

skupper init --console-auth unsecured # (1)

Skupper is now installed in namespace 'default'.  Use 'skupper status' to get more information.

skupper status

Skupper is installed in namespace 'default'. Status pending...
  1. This allows anyone to access the Skupper UI to visualize the clouds. Fine for demos, but not to be used in production.

See the status of the skupper pods. It takes a bit of time (usually around 2 minutes) until the pods are running:

kubectl get pods

NAME                                        READY   STATUS    RESTARTS   AGE
hybrid-cloud-backend-5948955b7-m5dzk        1/1     Running   0          16m
hybrid-cloud-backend-proxy-89f44c75-vx9kn   1/1     Running   0          51s
hybrid-cloud-frontend-7c85f9dfff-fmbd6      1/1     Running   0          10m
skupper-proxy-controller-77f6c568cc-f8xxd   1/1     Running   0          102s
skupper-router-7976948d9f-hn6ms             1/1     Running   0          104s

Finally create a token:

skupper connection-token token.yaml

Connection token written to token.yaml

In all the other clusters, use the connection token created in the previous step:

skupper init
skupper connect token.yaml
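Once connected, skupper status on either side should report the link (the exact message varies by version):

skupper status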

Everything is connected and ready to be used. This has been the short version to get started; keep reading if you want to learn how to build the Docker images, deploy them, etc.

Skupper UI

If you run:

kubectl get services

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                                  PORT(S)               AGE
hybrid-cloud-backend    ClusterIP      172.30.32.251    <none>                                                                       8080/TCP              40m
hybrid-cloud-frontend   LoadBalancer   172.30.25.65     add076df5798711eaad1a0241cddbab7-1371911574.eu-central-1.elb.amazonaws.com   8080:32647/TCP        39m
kubernetes              ClusterIP      172.30.0.1       <none>                                                                       443/TCP               71m
openshift               ExternalName   <none>           kubernetes.default.svc.cluster.local                                         <none>                70m
skupper-controller      ClusterIP      172.30.247.104   <none>                                                                       8080/TCP              34m
skupper-internal        ClusterIP      172.30.109.84    <none>                                                                       55671/TCP,45671/TCP   34m
skupper-messaging       ClusterIP      172.30.64.245    <none>                                                                       5671/TCP              34m

You’ll notice that there is a skupper-controller service, which is the entry point for the Skupper UI. Expose this service so it is reachable from outside the cluster to access the Skupper UI.
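How you expose it depends on your cluster; two common options (a sketch, adjust names and ports to your environment):

# OpenShift: create a route for the UI
oc expose service skupper-controller
oc get route skupper-controller

# Any cluster: port-forward and open http://localhost:8080
kubectl port-forward service/skupper-controller 8080:8080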

Services

Backend

If you want to build, push and deploy the service:

cd backend
./mvnw clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Pazr

If the image is already pushed to quay.io, you can skip the build and push steps:

cd backend

./mvnw clean package -DskipTests -Pazr -Dquarkus.kubernetes.deploy=true -Dquarkus.container-image.build=false -Dquarkus.container-image.push=false

Frontend

If you want to build, push and deploy the service:

cd frontend
./mvnw clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Pazr -Dquarkus.kubernetes.host=<your_public_host>

If the image is already pushed to quay.io, you can skip the build and push steps:

cd frontend

./mvnw clean package -DskipTests -Pazr -Dquarkus.kubernetes.deploy=true -Dquarkus.container-image.build=false -Dquarkus.container-image.push=false

Cloud Providers

The following profiles are provided: -Pazr, -Paws, -Pgcp and -Plocal. Each profile just sets an environment variable to identify the cluster.

Setting up Skupper

Make sure you have at least the backend project deployed on 2 different clusters. The frontend project can be deployed to just one cluster.

Here, we will assume it is deployed in a local cluster (local) and a public cluster (public).

Make sure to have 2 terminals with separate sessions, each logged into one of your clusters with the correct namespace context (but within the same folder).

Install the Skupper CLI

Follow the instructions provided in the Install Skupper section above.

Skupper setup

  1. In your public terminal session:

skupper init --id public
skupper connection-token private-to-public.yaml
  2. In your local terminal session:

skupper init --id private
skupper connect private-to-public.yaml
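Once both steps are done, a quick sanity check in either terminal (pod names will differ):

skupper status
kubectl get pods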

Annotate the services to join the Virtual Application Network

  1. In the terminal for the local cluster, annotate the hybrid-cloud-backend service:

kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
  2. In the terminal for the public cluster, annotate the hybrid-cloud-backend service:

kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http

Both services are now connected: if you scale one of them down to 0, or it gets overloaded, traffic is transparently load-balanced to the other cluster.
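To see this in action, scale the backend down to zero in one of the clusters and keep using the frontend; requests are then served by the backend in the other cluster (deployment name taken from the pod listing above):

kubectl scale deployment hybrid-cloud-backend --replicas=0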
