Running mountebank within K8S issues
grahambunce opened this issue · comments
I'm trying to run mountebank as our service virtualization endpoint within our K8S cluster and hitting a few issues. I'm not sure of best practice in this scenario, so some feedback would be helpful. We are looking to use mountebank to fake our services for performance testing and to isolate our dependencies.
We have created mountebank as a standalone K8S service, as follows: (this is from a Docker Desktop YAML file)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: testing-services
  name: virtualisationservice
  labels:
    app: virtualisationservice
spec:
  minReadySeconds: 5
  replicas: 1
  selector:
    matchLabels:
      app: virtualisationservice
  template:
    metadata:
      labels:
        app: virtualisationservice
    spec:
      volumes:
        - name: virtualisationservice-volume
          hostPath:
            path: /run/desktop/mnt/host/c/dev/testing/local-environment/mountebank/data
            type: Directory
      containers:
        - name: virtualisationservice
          image: bbyars/mountebank:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 2525
            - containerPort: 8090
            - containerPort: 8091
          command: ["mb --configfile /app/data/testdata.json --datadir /app/data/mbdb"]
          volumeMounts:
            - name: virtualisationservice-volume
              mountPath: "/app/data"
              readOnly: false
---
```
This will not work: on startup of the pod we keep getting errors like this:

```
Error: failed to start container "virtualisationservice": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "mb --configfile /app/data/testdata.json --datadir /app/data/mbdb": stat mb --configfile /app/data/testdata.json --datadir /app/data/mbdb: no such file or directory: unknown
```

(The whole string is being treated as a single executable name, because `command` is exec form: each array element is one argv token, so the runtime looks for a binary literally named `mb --configfile ...`.)
When we run the pod without the `--` arguments, the pod starts fine, we can exec into it, and we can see the mapped `/app/data` volume, but of course mountebank isn't configured with our imposters or stubs. We have quite a large dataset to manage. We thought about using curl, but we're not sure that's practical, and besides, we want the pods to be available for use as soon as they start.
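For anyone reading along, the curl route we considered would use mountebank's REST API on the admin port (2525). A rough sketch, assuming mb is reachable at `localhost:2525` and with an illustrative imposter body (not our real test data):

```shell
# Write a minimal http imposter definition; the port and stub body here
# are made-up examples, not the real dataset from the issue.
cat > imposter.json <<'EOF'
{
  "port": 8090,
  "protocol": "http",
  "stubs": [
    { "responses": [ { "is": { "statusCode": 200, "body": "stubbed response" } } ] }
  ]
}
EOF

# POST /imposters creates a single imposter; PUT /imposters replaces the
# full set in one call, which is friendlier for a large dataset.
curl -s -X POST http://localhost:2525/imposters \
  -H 'Content-Type: application/json' \
  -d @imposter.json \
  || echo "mb not reachable yet (run this once the pod is up)"
```

The downside remains the one noted above: the pod isn't usable until something has run this seeding step after startup.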
We tried `mb restart --configfile xxxxx`, but that just killed the pod and restarted it, so it's not appropriate.
We also tried this form of the YAML (probably the correct way of running a CMD with args, tbh):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: testing-services
  name: virtualisationservice
  labels:
    app: virtualisationservice
spec:
  minReadySeconds: 5
  replicas: 1
  selector:
    matchLabels:
      app: virtualisationservice
  template:
    metadata:
      labels:
        app: virtualisationservice
    spec:
      volumes:
        - name: virtualisationservice-volume
          hostPath:
            path: /run/desktop/mnt/host/c/dev/local-environment/mountebank/data
            type: Directory
      containers:
        - name: virtualisationservice
          image: bbyars/mountebank:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 2525
            - containerPort: 8090
            - containerPort: 8091
          command: ["mb"]
          args: ["--debug", "--loglevel=debug", "--logfile /app/data/logs/mb.txt", "--configfile /app/data/testdata.json", "--datadir /app/data/mbdb"]
          volumeMounts:
            - name: virtualisationservice-volume
              mountPath: "/app/data"
              readOnly: false
```
and while this started up fine, the `args` were ignored, so again mb was not configured with our imposters/stubs.
@bbyars I've moved forward on this. The issue was incorrect use of `args`: each flag and its value must be a separate array element, since every element becomes a single argv token. The correct approach:

```yaml
args: ["--debug", "--loglevel=debug", "--logfile", "/app/data/logs/mb.txt", "--configfile", "/app/data/testdata.json", "--datadir", "/app/data/mbdb"]
```
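For completeness, here is how the container section looks with the fix applied, a sketch reusing the same names and paths from the manifests above:

```yaml
containers:
  - name: virtualisationservice
    image: bbyars/mountebank:latest
    command: ["mb"]      # exec form: argv[0] is the executable only
    args:                # one argv token per array element
      - "--debug"
      - "--loglevel=debug"
      - "--logfile"
      - "/app/data/logs/mb.txt"
      - "--configfile"
      - "/app/data/testdata.json"
      - "--datadir"
      - "/app/data/mbdb"
```

With this split, `"--logfile /app/data/logs/mb.txt"` is no longer passed to mb as one unrecognized token, which is why the earlier attempt appeared to ignore the arguments.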