Setup Staging Environment
dennyabrain opened this issue · comments
Denny George commented
Overview
- Clean up existing infrastructure
- Set up a cluster for Tattle using eksctl
- Set up node groups for the web server, workers, single-node Elasticsearch, single-node RabbitMQ, and single-node Postgres
- Create a workflow to bring up and destroy the resources used by the staging environment (all node groups)
- Create k8s manifests for the project to deploy the staging environment
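The bring-up/destroy workflow could be sketched as a small wrapper script. The cluster name and region below match the eksctl commands later in this thread; the function names and the `cluster.yaml` path are placeholders, and the commands are echoed rather than executed so the sketch can be dry-run:

```shell
#!/usr/bin/env sh
# Sketch of a bring-up/destroy workflow for the staging cluster.
# CLUSTER and REGION match the eksctl commands in this thread;
# staging_up/staging_down and cluster.yaml are hypothetical names.
CLUSTER=sandbox
REGION=ap-south-1

staging_up() {
  # Create the cluster (and all node groups) from an eksctl config file
  echo "eksctl create cluster -f cluster.yaml"
}

staging_down() {
  # Tear down the cluster and every node group it owns
  echo "eksctl delete cluster --region=$REGION --name=$CLUSTER"
}

staging_down
```

Dropping the `echo`s would turn this into a real (destructive) script, so it is safer to keep them until the workflow is wired into CI.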
Denny George commented
Prior Cleanup
- Deleted unused EC2s and associated resources in the us-east region
- Deleted the old SQL database. A snapshot was taken and is saved, so we can always restore the DB from it, but I doubt we'll need to; it has data from the old Kosh and the annotation UI (which we already backed up as CSVs and released)
- Deleted the existing k8s cluster.
I am posting some stats from the current VPC and EC2 dashboards for the ap-south-1 region, as a future cleanup reference as we bring resources up and down.
EC2 dashboard
VPC dashboard
Denny George commented
Successfully brought up a managed Kubernetes cluster on our AWS account. Keeping the eksctl commands handy for later:
```shell
# Create the cluster
eksctl create cluster \
  --name sandbox \
  --version 1.29 \
  --region ap-south-1 \
  --nodegroup-name generalpurpose \
  --node-type t2.micro \
  --nodes 2 \
  --vpc-public-subnets=subnet-ID1-MASKED,subnet-ID2-MASKED

# Delete the cluster
eksctl delete cluster --region=ap-south-1 --name=sandbox
```
Untested, but you should be able to add new node groups to the cluster using this command:

```shell
eksctl create nodegroup --cluster=<clusterName> --region=<region> --name=<newNodeGroupName> --managed=false
```
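The matching teardown, for removing a single node group without touching the rest of the cluster, should be eksctl's `delete nodegroup` subcommand with the same flags (also untested here):

```shell
eksctl delete nodegroup --cluster=<clusterName> --region=<region> --name=<nodeGroupName>
```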
Denny George commented
eksctl manifest file:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: sandbox
  region: ap-south-1
  version: "1.29"

vpc:
  subnets:
    public:
      ap-south-1a:
        id: "subnet-MASKED"
      ap-south-1b:
        id: "subnet-MASKED"

nodeGroups:
  - name: generalpurpose
    instanceType: t2.micro
    desiredCapacity: 1
  - name: c7gxlarge
    instanceType: c7g.xlarge
    desiredCapacity: 1
    labels:
      node-class: "c7gxlarge"
    ssh:
      publicKeyPath: ~/.ssh/id_rsa.pub
  - name: c7g4xlarge
    instanceType: c7g.4xlarge
    desiredCapacity: 1
    labels:
      node-class: "c7g4xlarge"
    ssh:
      publicKeyPath: ~/.ssh/id_rsa.pub
```
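Presumably this config is applied with eksctl's `-f` flag rather than the long command-line form above (assuming the file is saved as `cluster.yaml`, a hypothetical name), with a quick sanity check afterwards:

```shell
# Create the cluster and all three node groups from the config file
eksctl create cluster -f cluster.yaml

# Verify the nodes registered and carry the expected node-class labels
kubectl get nodes --show-labels
```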
k8s manifest file for the deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: feluda-operator-vidvec
  labels:
    app.kubernetes.io/name: feluda-operator-vidvec
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: feluda-operator-vidvec
  template:
    metadata:
      labels:
        app.kubernetes.io/name: feluda-operator-vidvec
    spec:
      nodeSelector:
        node-class: "c7gxlarge"
      containers:
        - name: feluda-operator-vidvec
          image: tattletech/feluda-operator-vidvec:b4c4eca
          imagePullPolicy: Always
          # keep the container alive for debugging ("tails" was a typo for "tail")
          command: ["tail", "-f", "/dev/null"]
          resources:
            requests:
              cpu: "1000m"
              memory: "4000Mi"
            limits:
              cpu: "4000m"
              memory: "8000Mi"
```
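Assuming the manifest is saved as `feluda-operator-vidvec.yaml` (a hypothetical filename), it would be deployed and checked with the usual kubectl commands; the `node-class` selector should pin the pod to the c7gxlarge node group:

```shell
kubectl apply -f feluda-operator-vidvec.yaml

# Confirm the pod was scheduled onto a node-class=c7gxlarge node
kubectl get pods -l app.kubernetes.io/name=feluda-operator-vidvec -o wide
```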