Spark UI Controller

A Namespaced K8s controller exposing a route for any driver-svc binding to the Spark UI.

Usage

Create a new Role and a new ServiceAccount, bind them, then start the controller as a Deployment:

oc apply -f role.yaml
oc apply -f service_account.yaml
oc apply -f role_binding.yaml
oc apply -f spark-ui-controller-deployment.yaml

Done!
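For reference, a Role for a controller that watches Services and manages Routes would plausibly look like the sketch below. The exact rules shipped in role.yaml depend on the controller implementation, so treat this as a hypothetical outline rather than the actual manifest:

```yaml
# Hypothetical sketch of role.yaml: a namespaced controller that watches
# driver Services and creates Routes for them would need roughly these
# permissions. Check the repository's role.yaml for the real rules.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-ui-controller
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["route.openshift.io"]
    resources: ["routes"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```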

Development

Installing the operator-sdk CLI

export ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n $(uname -m) ;; esac)
export OS=$(uname | awk '{print tolower($0)}')
export OPERATOR_SDK_DL_URL=https://github.com/operator-framework/operator-sdk/releases/download/v1.13.0
curl -LO ${OPERATOR_SDK_DL_URL}/operator-sdk_${OS}_${ARCH}
chmod +x operator-sdk_${OS}_${ARCH} && sudo mv operator-sdk_${OS}_${ARCH} /usr/local/bin/operator-sdk
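Before downloading, you can sanity-check which release asset the snippet above will resolve to by recomputing the artifact name locally:

```shell
# Recompute the artifact name that the download URL above resolves to,
# so you can confirm it matches a published release asset.
ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n "$(uname -m)" ;; esac)
OS=$(uname | awk '{print tolower($0)}')
ASSET="operator-sdk_${OS}_${ARCH}"
echo "${ASSET}"
```

On a Linux x86_64 machine this resolves to operator-sdk_linux_amd64.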

Initializing a new controller project

First, initialize a new project under a domain, specifying the repository for the Go module:

export GO111MODULE=on
operator-sdk init \
--domain=github.com/pilillo \
--repo=github.com/pilillo/spark-ui-controller \
--license apache2 \
--skip-go-version-check \
--verbose

Notice that the last two flags can be omitted.

The project layout for Go-based operators is described here.

Let's create a controller for the existing core v1 Service type:

operator-sdk create api --group=core --version=v1 --kind=Service --controller=true --resource=false

Since we pass --resource=false, only a controller is created and no CRD, so no api folder is generated. However, if you have a look at the Dockerfile and the Makefile, most scripts expect that folder to exist. As a workaround, add an empty api folder with:

mkdir api

Install the openshift/api project to manage route resource types:

go get -u github.com/openshift/api
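Once installed, the Route types can be registered with the manager's scheme in main.go. A minimal sketch, assuming the default operator-sdk scaffold (fragment only, not compilable on its own):

```go
import (
	routev1 "github.com/openshift/api/route/v1"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

var scheme = runtime.NewScheme()

func init() {
	// register the built-in client-go types plus the OpenShift Route types,
	// so the controller's client can read Services and create Routes
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(routev1.AddToScheme(scheme))
}
```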

Test the controller

You can run the controller on your target cluster, as defined in ~/.kube/config:

make run

This also works with OpenShift. Make sure you are in the right context and project to avoid surprises.

Build the controller

Again, please have a look at the Makefile:

docker-build: test ## Build docker image with the manager.
	docker build -t ${IMG} .

docker-push: ## Push docker image with the manager.
	docker push ${IMG}

Therefore:

export IMG=pilillo/spark-ui-controller:v0.0.1
make docker-build

Push the controller as a Docker image

Use the Makefile:

make docker-push

Unless a different registry is specified in the IMG variable, the docker image will end up on Docker Hub. For instance:

$ make docker-push
docker push pilillo/spark-ui-controller:v0.0.1
The push refers to repository [docker.io/pilillo/spark-ui-controller]
23b8cccb6fce: Pushing [=============================================>     ]  41.78MB/46.06MB
c0d270ab7e0d: Pushing [==================================================>]  3.697MB

Your controller is now available on Docker Hub or your private registry!
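To target a different registry, prefix IMG with the registry hostname. Docker treats the first path component of an image name as a registry host only when it contains a dot or a colon; otherwise the image defaults to docker.io. The quay.io target below is a hypothetical example:

```shell
# Illustrative only: quay.io is a hypothetical target registry here.
# The first path component is treated as a registry host because it
# contains a dot; without such a prefix, docker pushes to docker.io.
IMG=quay.io/pilillo/spark-ui-controller:v0.0.1
echo "target registry: ${IMG%%/*}"
```

With that variable exported, make docker-build and make docker-push operate against the chosen registry instead.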

Bundle the controller to use it with the Operator Lifecycle Manager

You can use the operator-sdk to build a bundle format that can be consumed by the Operator Lifecycle Manager (OLM). See the official documentation here.

Specifically, you can use the makefile as follows:

make bundle

which calls the commands generate bundle and bundle validate:

bundle: manifests kustomize ## Generate bundle manifests and metadata, then validate generated files.
	operator-sdk generate kustomize manifests -q
	cd config/manager && $(KUSTOMIZE) edit set image controller=$(IMG)
	$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)
	operator-sdk bundle validate ./bundle

This creates:

  • a bundle manifests directory at bundle/manifests containing a ClusterServiceVersion object
  • a bundle metadata directory at bundle/metadata
  • all custom resource definitions (CRDs) at config/crd
  • a Dockerfile named bundle.Dockerfile

Once done, you can again use the Makefile to build and push the bundle:

.PHONY: bundle-build
bundle-build: ## Build the bundle image.
	docker build -f bundle.Dockerfile -t $(BUNDLE_IMG) .

.PHONY: bundle-push
bundle-push: ## Push the bundle image.
	$(MAKE) docker-push IMG=$(BUNDLE_IMG)
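In the default scaffold, BUNDLE_IMG is derived from IMAGE_TAG_BASE and VERSION. Assuming those defaults, the bundle image name resolves as follows (the values below are illustrative):

```shell
# Assuming the scaffold's default naming convention,
# BUNDLE_IMG = IMAGE_TAG_BASE-bundle:vVERSION
IMAGE_TAG_BASE=pilillo/spark-ui-controller
VERSION=0.0.1
BUNDLE_IMG=${IMAGE_TAG_BASE}-bundle:v${VERSION}
echo "${BUNDLE_IMG}"
```

Both variables can be overridden on the make command line, e.g. make bundle-build VERSION=0.0.2.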

Once pushed, the bundle can be deployed with:

operator-sdk run bundle \
    [-n <namespace>] \
    <registry>/<user>/<bundle_image_name>:<tag>
