eraser-dev / eraser

🧹 Cleaning up images from Kubernetes nodes

Home Page: https://eraser-dev.github.io/eraser/

Parametrize Controller-Manager to pick Resources/Limits from values.yaml

ayushiaks opened this issue

What steps did you take and what happened:
We enabled the image cleaner on an AKS cluster.
The eraser-controller-manager pod quickly goes into CrashLoopBackOff:

State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled

This is leaving a bunch of eraser-aks pods in the Completed state that are never cleaned up, since the controller isn't working.
What did you expect to happen:
The pod to stay stable within the given limits.

Anything else you would like to add:
I tried manually increasing the memory limit and it works fine; the manager easily consumes more than 50Mi.
Just like all the other containers (scanner, remover, etc.) take parameterized resource requests/limits from the values.yaml file, please make sure the manager pod has this capability as well.

We're blocked, as we cannot use this Helm chart without increasing the memory limit.
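
Something along these lines in values.yaml is what I have in mind. The key names below just follow the pattern the other components already use and are only illustrative, not the chart's actual schema:

```yaml
# Illustrative sketch only -- key names mirror the existing scanner/remover
# sections and may not match the chart's real schema.
manager:
  resources:
    requests:
      cpu: 100m
      memory: 64Mi
    limits:
      memory: 128Mi
```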

Environment:

  • Eraser version: 1.2.1
  • Kubernetes version: (use kubectl version): 1.25.6

Hey @ayushiaks - I've got some follow-up questions here.

I'm a little confused as to how you're using Eraser. You say you enabled the Image Cleaner addon on an AKS cluster, but if that were the case, you wouldn't be using a Helm chart at all.

If you are in fact using the Helm chart and not the addon, the resources can be configured via the .Values.deploy.resources value.
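
For example, an override file along these lines should do it (the nesting under deploy.resources follows the value mentioned above; the numbers are placeholders, not recommendations):

```yaml
# overrides.yaml -- resource requests/limits for the controller-manager.
# Memory/CPU values here are placeholders; pick whatever fits your cluster.
deploy:
  resources:
    requests:
      cpu: 100m
      memory: 64Mi
    limits:
      memory: 128Mi
```

Then pass it to Helm with -f overrides.yaml when installing or upgrading the chart.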

Thanks for pointing that out, I was looking at the wrong YAML (config/manager/manager.yaml).
Closing the issue.